2026-03-10T05:47:35.256 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T05:47:35.261 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T05:47:35.284 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic}
email: null
first_in_suite: false
flavor: default
job_id: '920'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    - mons down
    - mon down
    - out of quorum
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
      extra_system_packages:
        deb:
        - python3-xmltodict
        - python3-jmespath
        rpm:
        - bzip2
        - perl-Test-Harness
        - python3-xmltodict
        - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
- - host.c
  - mon.c
  - osd.2
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN8PIgYIn/a7YzMZesuRhJOUvhXGBHB+DpL0nwcxxEEHTZJAungnwxZB70lnrVyvw1flLusdH7W9MOvQfBZWEnw=
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCIMDRGf/dv+8RsfGv98etFEw43SF0Eby2Asv7GFnj3VFW0c8ssRp5jYQ92xRCk/+IDe5HdH6IlFRRjmA1COP0M=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIS0tRNFErYxlJ2K5DgCYRTBbq8C0MNDu3VIO8wwcPTcTMfWjmcUiryNkOgIKdbquzr515cW/e5Aav1VSzfKYpI=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - |
      set -e
      set -x
      ceph orch apply node-exporter
      ceph orch apply grafana
      ceph orch apply alertmanager
      ceph orch apply prometheus
      sleep 240
      ceph orch ls
      ceph orch ps
      ceph orch host ls
      MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r 'last | .daemon_name')
      GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '.[]' | jq -r '.hostname')
      PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '.[]' | jq -r '.hostname')
      ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '.[]' | jq -r '.hostname')
      GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '.[] | select(.hostname==$GRAFANA_HOST) | .addr')
      PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '.[] | select(.hostname==$PROM_HOST) | .addr')
      ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '.[] | select(.hostname==$ALERTM_HOST) | .addr')
      # check each host node-exporter metrics endpoint is responsive
      ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '.[] | .addr')
      for ip in $ALL_HOST_IPS; do
        curl -s http://${ip}:9100/metric
      done
      # check grafana endpoints are responsive and database health is okay
      curl -k -s https://${GRAFANA_IP}:3000/api/health
      curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '.database == "ok"'
      # stop mon daemon in order to trigger an alert
      ceph orch daemon stop $MON_DAEMON
      sleep 120
      # check prometheus endpoints are responsive and mon down alert is firing
      curl -s http://${PROM_IP}:9095/api/v1/status/config
      curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '.status == "success"'
      curl -s http://${PROM_IP}:9095/api/v1/alerts
      curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'
      # check alertmanager endpoints are responsive and mon down alert is active
      curl -s http://${ALERTM_IP}:9093/api/v2/status
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T05:47:35.284 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T05:47:35.285 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T05:47:35.285 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T05:47:35.285 INFO:teuthology.task.internal:Checking packages...
2026-03-10T05:47:35.285 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T05:47:35.285 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T05:47:35.285 INFO:teuthology.packaging:ref: None
2026-03-10T05:47:35.285 INFO:teuthology.packaging:tag: None
2026-03-10T05:47:35.285 INFO:teuthology.packaging:branch: squid
2026-03-10T05:47:35.285 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T05:47:35.285 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T05:47:36.072 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T05:47:36.073 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T05:47:36.074 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T05:47:36.074 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T05:47:36.075 INFO:teuthology.task.internal:Saving configuration
2026-03-10T05:47:36.079 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T05:47:36.080 INFO:teuthology.task.internal.check_lock:Checking locks...
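The job config's `log-ignorelist` entries (e.g. `\(MDS_ALL_DOWN\)`, `MON_DOWN`) are regex fragments matched against cluster log lines so that expected warnings do not fail the run. A minimal sketch of that filtering idea, not teuthology's actual implementation:

```python
import re

# Patterns as they appear in this job's log-ignorelist (regex fragments).
IGNORELIST = [
    r"\(MDS_ALL_DOWN\)",
    r"MON_DOWN",
    r"out of quorum",
    r"CEPHADM_STRAY_DAEMON",
]

def is_ignored(log_line: str) -> bool:
    """Return True if any ignorelist pattern occurs anywhere in the line."""
    return any(re.search(pat, log_line) for pat in IGNORELIST)

# The MON_DOWN warning this job deliberately provokes would be ignored:
print(is_ignored("cluster [WRN] Health check failed: 1/3 mons down (MON_DOWN)"))  # True
print(is_ignored("cluster [ERR] OSD_FULL: 1 full osd(s)"))                        # False
```

The parentheses in entries like `\(MDS_ALL_DOWN\)` are backslash-escaped precisely because the entries are treated as regexes, not literal substrings.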
2026-03-10T05:47:36.087 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 05:46:02.236537', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN8PIgYIn/a7YzMZesuRhJOUvhXGBHB+DpL0nwcxxEEHTZJAungnwxZB70lnrVyvw1flLusdH7W9MOvQfBZWEnw='}
2026-03-10T05:47:36.093 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 05:46:02.236331', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCIMDRGf/dv+8RsfGv98etFEw43SF0Eby2Asv7GFnj3VFW0c8ssRp5jYQ92xRCk/+IDe5HdH6IlFRRjmA1COP0M='}
2026-03-10T05:47:36.098 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 05:46:02.235570', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIS0tRNFErYxlJ2K5DgCYRTBbq8C0MNDu3VIO8wwcPTcTMfWjmcUiryNkOgIKdbquzr515cW/e5Aav1VSzfKYpI='}
2026-03-10T05:47:36.098 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T05:47:36.099 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0']
2026-03-10T05:47:36.099 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1']
2026-03-10T05:47:36.099 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.c', 'mon.c', 'osd.2']
2026-03-10T05:47:36.099 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T05:47:36.105 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-10T05:47:36.110 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-10T05:47:36.115 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-10T05:47:36.115 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f1963cf3eb0>, signals=[15])
2026-03-10T05:47:36.115 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T05:47:36.116 INFO:teuthology.task.internal:Opening connections...
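The check_lock task logs one status dict per target and verifies the machines are usable before the run proceeds. A hedged, illustrative sketch of that kind of validation (function name and error messages are mine, not teuthology's):

```python
def check_machine(status: dict, expected_owner: str) -> None:
    """Raise if a target is down or not locked by the job owner."""
    if not status["up"]:
        raise RuntimeError(f"{status['name']} is down")
    if not status["locked"] or status["locked_by"] != expected_owner:
        raise RuntimeError(f"{status['name']} is not locked by {expected_owner}")

# A trimmed-down version of the vm04.local status dict from the log above:
status = {"name": "vm04.local", "up": True, "locked": True, "locked_by": "kyr"}
check_machine(status, "kyr")  # passes silently: up, locked, and owned by 'kyr'
print("ok")
```

In the log, all three targets are `'up': True` and `'locked_by': 'kyr'`, so the task moves straight on to internal.add_remotes.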
2026-03-10T05:47:36.116 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-10T05:47:36.116 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:47:36.175 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-10T05:47:36.176 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:47:36.238 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T05:47:36.239 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:47:36.297 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T05:47:36.298 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-10T05:47:36.315 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-10T05:47:36.315 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:NAME="CentOS Stream"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:ID="centos"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE="rhel fedora"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:PLATFORM_ID="platform:el9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:ANSI_COLOR="0;31"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:LOGO="fedora-logo-icon"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://centos.org/"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T05:47:36.373 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T05:47:36.374 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-10T05:47:36.379 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-10T05:47:36.399 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-10T05:47:36.399 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:NAME="CentOS Stream"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:ID="centos"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE="rhel fedora"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:PLATFORM_ID="platform:el9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:ANSI_COLOR="0;31"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:LOGO="fedora-logo-icon"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://centos.org/"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T05:47:36.457 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T05:47:36.457 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-10T05:47:36.462 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T05:47:36.476 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T05:47:36.476 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:NAME="CentOS Stream"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:ID="centos"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE="rhel fedora"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:PLATFORM_ID="platform:el9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:ANSI_COLOR="0;31"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:LOGO="fedora-logo-icon"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://centos.org/"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T05:47:36.531 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T05:47:36.532 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T05:47:36.535 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T05:47:36.537 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T05:47:36.538 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T05:47:36.538 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-10T05:47:36.539 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-10T05:47:36.541 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T05:47:36.585 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T05:47:36.587 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T05:47:36.587 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-10T05:47:36.596 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-10T05:47:36.598 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T05:47:36.612 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T05:47:36.612 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T05:47:36.642 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T05:47:36.642 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T05:47:36.650 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-10T05:47:36.666 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:36.858 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-10T05:47:36.872 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:37.062 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T05:47:37.078 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:37.276 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T05:47:37.277 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T05:47:37.277 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T05:47:37.280 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T05:47:37.281 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T05:47:37.298 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T05:47:37.299 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T05:47:37.300 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T05:47:37.300 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T05:47:37.336 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T05:47:37.339 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T05:47:37.359 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T05:47:37.361 INFO:teuthology.task.internal:Enabling coredump saving...
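The `mkdir -p -m0755` and `install -d -m0755` commands above both create a directory tree idempotently: they succeed whether or not the directories already exist, which is why teuthology can re-run them per host without error handling. A sketch of the same behavior in Python:

```python
import os
import tempfile

base = tempfile.mkdtemp()
archive = os.path.join(base, "cephtest", "archive")

# Equivalent of `install -d -m0755 -- .../cephtest/archive`:
# create the whole tree if missing, succeed silently if it already exists.
os.makedirs(archive, exist_ok=True)
os.makedirs(archive, exist_ok=True)  # idempotent: second call is a no-op
os.chmod(archive, 0o755)             # chmod explicitly; makedirs' mode= is masked by umask

print(os.path.isdir(archive))  # True
```

Setting the mode via `os.chmod` rather than `makedirs(mode=...)` avoids surprises from the process umask, which is also why the shell commands pass the mode flag explicitly.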
2026-03-10T05:47:37.361 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T05:47:37.404 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:37.405 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T05:47:37.419 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:37.419 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T05:47:37.433 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:47:37.433 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T05:47:37.446 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T05:47:37.461 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T05:47:37.471 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.483 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.486 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.496 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.503 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.513 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:47:37.516 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T05:47:37.517 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T05:47:37.517 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T05:47:37.526 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T05:47:37.540 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T05:47:37.581 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T05:47:37.583 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T05:47:37.583 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T05:47:37.592 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T05:47:37.608 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T05:47:37.636 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T05:47:37.670 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T05:47:37.726 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-10T05:47:37.726 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T05:47:37.783 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T05:47:37.804 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T05:47:37.860 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T05:47:37.860 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T05:47:37.919 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T05:47:37.942 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T05:47:38.002 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T05:47:38.002 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T05:47:38.064 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-10T05:47:38.065 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-10T05:47:38.067 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T05:47:38.094 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T05:47:38.098 INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T05:47:38.132 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T05:47:38.582 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T05:47:38.584 INFO:teuthology.task.internal:Starting timer...
2026-03-10T05:47:38.584 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T05:47:38.587 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T05:47:38.589 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0']}
2026-03-10T05:47:38.589 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-10T05:47:38.589 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-10T05:47:38.589 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T05:47:38.589 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T05:47:38.589 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T05:47:38.589 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T05:47:38.589 INFO:teuthology.run_tasks:Running task ansible.cephlab...
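The internal.sudo step above rewrites /etc/sudoers with two sed expressions: the first negates `requiretty` (so non-interactive SSH commands can use sudo), the second un-negates `visiblepw`. The same substitutions expressed with Python's `re`, as a sketch mirroring those sed patterns:

```python
import re

def patch_sudoers_line(line: str) -> str:
    # sed 's/^\([^#]*\) \(requiretty\)/\1 !\2/g'
    # "Defaults    requiretty" -> "Defaults    !requiretty"
    line = re.sub(r"^([^#]*) (requiretty)", r"\1 !\2", line)
    # sed 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g'
    # "Defaults    !visiblepw" -> "Defaults    visiblepw"
    line = re.sub(r"^([^#]*) !(visiblepw)", r"\1 \2", line)
    return line

print(patch_sudoers_line("Defaults requiretty"))    # Defaults !requiretty
print(patch_sudoers_line("Defaults !visiblepw"))    # Defaults visiblepw
print(patch_sudoers_line("# Defaults requiretty"))  # unchanged: [^#]* cannot cross the '#'
```

The `[^#]*` prefix anchored at `^` is what keeps commented-out lines untouched, and `-i.orig.teuthology` leaves a backup of the original sudoers for teardown.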
2026-03-10T05:47:38.590 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T05:47:38.591 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T05:47:38.592 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T05:47:39.388 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T05:47:39.394 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T05:47:39.395 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryxug4fuhq --limit vm04.local,vm06.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T05:49:58.177 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm06.local'), Remote(name='ubuntu@vm08.local')]
2026-03-10T05:49:58.178 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-10T05:49:58.178 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:49:58.247 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-10T05:49:58.331 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-10T05:49:58.332 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-10T05:49:58.332 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:49:58.397 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-10T05:49:58.476 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-10T05:49:58.476 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T05:49:58.477 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:49:58.538 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T05:49:58.622 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T05:49:58.622 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T05:49:58.624 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T05:49:58.625 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-10T05:49:58.625 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T05:49:58.628 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-10T05:49:58.628 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T05:49:58.631 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-10T05:49:58.631 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T05:49:58.661 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntp.service: Unit ntp.service not loaded. 2026-03-10T05:49:58.672 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntp.service: Unit ntp.service not loaded. 2026-03-10T05:49:58.677 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded. 2026-03-10T05:49:58.696 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntp.service: Unit ntp.service not loaded. 
2026-03-10T05:49:58.696 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded. 2026-03-10T05:49:58.708 INFO:teuthology.orchestra.run.vm04.stderr:sudo: ntpd: command not found 2026-03-10T05:49:58.711 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded. 2026-03-10T05:49:58.723 INFO:teuthology.orchestra.run.vm04.stdout:506 Cannot talk to daemon 2026-03-10T05:49:58.733 INFO:teuthology.orchestra.run.vm06.stderr:sudo: ntpd: command not found 2026-03-10T05:49:58.740 INFO:teuthology.orchestra.run.vm08.stderr:sudo: ntpd: command not found 2026-03-10T05:49:58.742 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntp.service: Unit ntp.service not found. 2026-03-10T05:49:58.743 INFO:teuthology.orchestra.run.vm06.stdout:506 Cannot talk to daemon 2026-03-10T05:49:58.755 INFO:teuthology.orchestra.run.vm08.stdout:506 Cannot talk to daemon 2026-03-10T05:49:58.758 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntp.service: Unit ntp.service not found. 2026-03-10T05:49:58.761 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntpd.service: Unit ntpd.service not found. 2026-03-10T05:49:58.771 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntpd.service: Unit ntpd.service not found. 2026-03-10T05:49:58.773 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntp.service: Unit ntp.service not found. 2026-03-10T05:49:58.792 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntpd.service: Unit ntpd.service not found. 
2026-03-10T05:49:58.816 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-10T05:49:58.826 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found
2026-03-10T05:49:58.843 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:^? vps-fra8.orleans.ddnss.de     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:^? 139-144-71-56.ip.linodeu>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:^? static.236.223.13.49.cli>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.900 INFO:teuthology.orchestra.run.vm04.stdout:^? bond1-1201.fsn-lf-s02.pr>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:===============================================================================
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:^? 139-144-71-56.ip.linodeu>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:^? static.236.223.13.49.cli>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:^? bond1-1201.fsn-lf-s02.pr>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm06.stdout:^? vps-fra8.orleans.ddnss.de     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:^? 139-144-71-56.ip.linodeu>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:^? static.236.223.13.49.cli>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:^? bond1-1201.fsn-lf-s02.pr>     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.orchestra.run.vm08.stdout:^? vps-fra8.orleans.ddnss.de     0   6     0     -     +0ns[   +0ns] +/-    0ns
2026-03-10T05:49:58.901 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T05:49:58.904 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T05:49:58.904 DEBUG:teuthology.orchestra.run.vm04:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T05:49:58.904 DEBUG:teuthology.orchestra.run.vm06:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T05:49:58.904 DEBUG:teuthology.orchestra.run.vm08:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T05:49:58.907 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo dnf remove nvme-cli -y
2026-03-10T05:49:58.907 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.907 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.907 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.907 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm06.local
2026-03-10T05:49:58.907 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
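The clock-sync command above is a fallback ladder: each `a || b || c` chain tries ntp, then ntpd, then chronyd in turn (on these CentOS 9 Stream nodes only chronyd exists, hence the `ntpd: command not found` and `ntpq: command not found` noise), and the trailing `|| true` keeps the peer-reporting step from ever failing the run. A minimal runnable illustration of that pattern, with hypothetical stub functions standing in for the real daemons:

```shell
#!/usr/bin/env sh
# Stubbed sketch of the `a || b || c` fallback ladder used by the teuthology
# clock-sync command. `try_ntpd` and `step_with_chrony` are illustrative
# stand-ins, not real tools.
try_ntpd() { echo "ntpd: command not found" >&2; return 127; }
step_with_chrony() { echo "stepped clock with chronyc makestep"; }
try_ntpd || step_with_chrony      # first command fails, so the fallback runs

report_peers() { return 1; }      # e.g. ntpq is missing on the node
report_peers || true              # `|| true` makes reporting best-effort
echo "exit status preserved: $?"  # 0, so the overall command never fails
```

This is why the `Failed to stop ntp.service` and `506 Cannot talk to daemon` lines above are harmless: only the last command in each chain needs to succeed.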
2026-03-10T05:49:58.907 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.907 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.907 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.944 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf remove nvme-cli -y
2026-03-10T05:49:58.944 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.944 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.944 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.945 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm04.local
2026-03-10T05:49:58.945 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T05:49:58.945 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.945 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.945 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.946 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf remove nvme-cli -y
2026-03-10T05:49:58.946 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.946 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.946 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.946 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm08.local
2026-03-10T05:49:58.946 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T05:49:58.946 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T05:49:58.946 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:58.946 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T05:49:59.139 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: nvme-cli
2026-03-10T05:49:59.139 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T05:49:59.144 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T05:49:59.144 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T05:49:59.144 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T05:49:59.158 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: nvme-cli
2026-03-10T05:49:59.158 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T05:49:59.161 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T05:49:59.163 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T05:49:59.163 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T05:49:59.164 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: nvme-cli
2026-03-10T05:49:59.164 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T05:49:59.169 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T05:49:59.169 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T05:49:59.169 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
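The two `sed` passes in the pexec task switch podman's container runtime from crun to runc: the first uncomments a commented-out `#runtime = "crun"` line and rewrites it to runc; the second comments out any runtime line still actively set to crun (order matters, since a line rewritten to runc no longer matches the second pattern). A runnable sketch against a temporary copy, so it is safe to execute anywhere; the sample file content is an assumption, and the real target is `/usr/share/containers/containers.conf` edited with sudo:

```shell
#!/usr/bin/env bash
# Sketch of the containers.conf runtime toggle performed by the pexec task,
# applied to a throwaway temp file instead of the real config.
set -e
conf=$(mktemp)
printf '#runtime = "crun"\n' > "$conf"                    # assumed default: crun, commented out
sed -i 's/^#runtime = "crun"/runtime = "runc"/g' "$conf"  # pass 1: uncomment and switch to runc
sed -i 's/runtime = "crun"/#runtime = "crun"/g' "$conf"   # pass 2: disable any still-active crun line
grep '^runtime = "runc"$' "$conf"                         # podman would now launch containers via runc
rm -f "$conf"
```

Running both passes covers either starting state: a commented default ends up as an active runc line, while an explicitly enabled crun line gets commented out.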
2026-03-10T05:49:59.531 INFO:teuthology.orchestra.run.vm08.stdout:Last metadata expiration check: 0:01:45 ago on Tue 10 Mar 2026 05:48:14 AM UTC.
2026-03-10T05:49:59.577 INFO:teuthology.orchestra.run.vm04.stdout:Last metadata expiration check: 0:01:32 ago on Tue 10 Mar 2026 05:48:27 AM UTC.
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: Package              Arch    Version          Repository  Size
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli             x86_64  2.16-1.el9       baseos      1.2 M
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: nvmetcli             noarch  0.8-3.el9        baseos      44 k
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: runc                 x86_64  4:1.4.0-2.el9    appstream   4.0 M
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell  noarch  1:1.1.30-1.el9   baseos      72 k
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: python3-kmod         x86_64  0.9-32.el9       baseos      84 k
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing    noarch  2.4.7-9.el9      baseos      150 k
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout: python3-urwid        x86_64  2.1.2-4.el9      baseos      837 k
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:Install 7 Packages
2026-03-10T05:49:59.636 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:49:59.637 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 6.3 M
2026-03-10T05:49:59.637 INFO:teuthology.orchestra.run.vm08.stdout:Installed size: 24 M
2026-03-10T05:49:59.637 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages:
2026-03-10T05:49:59.684 INFO:teuthology.orchestra.run.vm06.stdout:Last metadata expiration check: 0:01:06 ago on Tue 10 Mar 2026 05:48:53 AM UTC.
2026-03-10T05:49:59.684 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: Package              Arch    Version          Repository  Size
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli             x86_64  2.16-1.el9       baseos      1.2 M
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: nvmetcli             noarch  0.8-3.el9        baseos      44 k
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: runc                 x86_64  4:1.4.0-2.el9    appstream   4.0 M
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell  noarch  1:1.1.30-1.el9   baseos      72 k
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: python3-kmod         x86_64  0.9-32.el9       baseos      84 k
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing    noarch  2.4.7-9.el9      baseos      150 k
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout: python3-urwid        x86_64  2.1.2-4.el9      baseos      837 k
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:Install 7 Packages
2026-03-10T05:49:59.685 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:49:59.686 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 6.3 M
2026-03-10T05:49:59.686 INFO:teuthology.orchestra.run.vm04.stdout:Installed size: 24 M
2026-03-10T05:49:59.686 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout: Package              Arch    Version          Repository  Size
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout:Installing:
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout: nvme-cli             x86_64  2.16-1.el9       baseos      1.2 M
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout: nvmetcli             noarch  0.8-3.el9        baseos      44 k
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout: runc                 x86_64  4:1.4.0-2.el9    appstream   4.0 M
2026-03-10T05:49:59.822 INFO:teuthology.orchestra.run.vm06.stdout:Installing dependencies:
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-configshell  noarch  1:1.1.30-1.el9   baseos      72 k
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-kmod         x86_64  0.9-32.el9       baseos      84 k
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyparsing    noarch  2.4.7-9.el9      baseos      150 k
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-urwid        x86_64  2.1.2-4.el9      baseos      837 k
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:Install 7 Packages
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:Total download size: 6.3 M
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:Installed size: 24 M
2026-03-10T05:49:59.823 INFO:teuthology.orchestra.run.vm06.stdout:Downloading Packages:
2026-03-10T05:50:00.260 INFO:teuthology.orchestra.run.vm04.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 117 kB/s | 44 kB 00:00
2026-03-10T05:50:00.328 INFO:teuthology.orchestra.run.vm08.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 768 kB/s | 44 kB 00:00
2026-03-10T05:50:00.336 INFO:teuthology.orchestra.run.vm08.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 1.1 MB/s | 72 kB 00:00
2026-03-10T05:50:00.361 INFO:teuthology.orchestra.run.vm08.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 2.5 MB/s | 84 kB 00:00
2026-03-10T05:50:00.366 INFO:teuthology.orchestra.run.vm04.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 149 kB/s | 72 kB 00:00
2026-03-10T05:50:00.374 INFO:teuthology.orchestra.run.vm08.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 3.9 MB/s | 150 kB 00:00
2026-03-10T05:50:00.396 INFO:teuthology.orchestra.run.vm08.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 9.2 MB/s | 1.2 MB 00:00
2026-03-10T05:50:00.423 INFO:teuthology.orchestra.run.vm08.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 13 MB/s | 837 kB 00:00
2026-03-10T05:50:00.505 INFO:teuthology.orchestra.run.vm08.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 30 MB/s | 4.0 MB 00:00
2026-03-10T05:50:00.505 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:00.505 INFO:teuthology.orchestra.run.vm08.stdout:Total 7.2 MB/s | 6.3 MB 00:00
2026-03-10T05:50:00.525 INFO:teuthology.orchestra.run.vm04.stdout:(3/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 952 kB/s | 150 kB 00:00
2026-03-10T05:50:00.588 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T05:50:00.597 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T05:50:00.597 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T05:50:00.602 INFO:teuthology.orchestra.run.vm04.stdout:(4/7): nvme-cli-2.16-1.el9.x86_64.rpm 1.6 MB/s | 1.2 MB 00:00
2026-03-10T05:50:00.665 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T05:50:00.665 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T05:50:00.754 INFO:teuthology.orchestra.run.vm06.stdout:(1/7): python3-configshell-1.1.30-1.el9.noarch. 3.5 MB/s | 72 kB 00:00
2026-03-10T05:50:00.760 INFO:teuthology.orchestra.run.vm04.stdout:(5/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 3.5 MB/s | 837 kB 00:00
2026-03-10T05:50:00.772 INFO:teuthology.orchestra.run.vm06.stdout:(2/7): python3-kmod-0.9-32.el9.x86_64.rpm 4.7 MB/s | 84 kB 00:00
2026-03-10T05:50:00.780 INFO:teuthology.orchestra.run.vm04.stdout:(6/7): python3-kmod-0.9-32.el9.x86_64.rpm 162 kB/s | 84 kB 00:00
2026-03-10T05:50:00.796 INFO:teuthology.orchestra.run.vm06.stdout:(3/7): nvmetcli-0.8-3.el9.noarch.rpm 704 kB/s | 44 kB 00:00
2026-03-10T05:50:00.799 INFO:teuthology.orchestra.run.vm06.stdout:(4/7): nvme-cli-2.16-1.el9.x86_64.rpm 18 MB/s | 1.2 MB 00:00
2026-03-10T05:50:00.803 INFO:teuthology.orchestra.run.vm06.stdout:(5/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 4.8 MB/s | 150 kB 00:00
2026-03-10T05:50:00.837 INFO:teuthology.orchestra.run.vm06.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 20 MB/s | 837 kB 00:00
2026-03-10T05:50:00.842 INFO:teuthology.orchestra.run.vm08.stdout:  Preparing        :                                                        1/1
2026-03-10T05:50:00.853 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-urwid-2.1.2-4.el9.x86_64                       1/7
2026-03-10T05:50:00.866 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-pyparsing-2.4.7-9.el9.noarch                   2/7
2026-03-10T05:50:00.874 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:00.882 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:00.883 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:00.934 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:01.089 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : runc-4:1.4.0-2.el9.x86_64                              6/7
2026-03-10T05:50:01.095 INFO:teuthology.orchestra.run.vm08.stdout:  Installing       : nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:01.224 INFO:teuthology.orchestra.run.vm04.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 6.4 MB/s | 4.0 MB 00:00
2026-03-10T05:50:01.226 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:01.227 INFO:teuthology.orchestra.run.vm04.stdout:Total 4.1 MB/s | 6.3 MB 00:01
2026-03-10T05:50:01.290 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T05:50:01.300 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T05:50:01.300 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T05:50:01.370 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T05:50:01.370 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T05:50:01.440 INFO:teuthology.orchestra.run.vm06.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 6.2 MB/s | 4.0 MB 00:00
2026-03-10T05:50:01.440 INFO:teuthology.orchestra.run.vm06.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:01.440 INFO:teuthology.orchestra.run.vm06.stdout:Total 3.9 MB/s | 6.3 MB 00:01
2026-03-10T05:50:01.490 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:01.490 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T05:50:01.490 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:01.545 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        :                                                        1/1
2026-03-10T05:50:01.545 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T05:50:01.557 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-urwid-2.1.2-4.el9.x86_64                       1/7
2026-03-10T05:50:01.557 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T05:50:01.557 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T05:50:01.568 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-pyparsing-2.4.7-9.el9.noarch                   2/7
2026-03-10T05:50:01.575 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:01.583 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:01.584 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:01.637 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:01.638 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T05:50:01.638 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T05:50:01.778 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : runc-4:1.4.0-2.el9.x86_64                              6/7
2026-03-10T05:50:01.790 INFO:teuthology.orchestra.run.vm04.stdout:  Installing       : nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:01.852 INFO:teuthology.orchestra.run.vm06.stdout:  Preparing        :                                                        1/1
2026-03-10T05:50:01.867 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : python3-urwid-2.1.2-4.el9.x86_64                       1/7
2026-03-10T05:50:01.879 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : python3-pyparsing-2.4.7-9.el9.noarch                   2/7
2026-03-10T05:50:01.886 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:01.897 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:01.904 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:01.970 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: nvmetcli-0.8-3.el9.noarch                              5/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : nvme-cli-2.16-1.el9.x86_64                             1/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : nvmetcli-0.8-3.el9.noarch                              2/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : python3-pyparsing-2.4.7-9.el9.noarch                   5/7
2026-03-10T05:50:02.124 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : python3-urwid-2.1.2-4.el9.x86_64                       6/7
2026-03-10T05:50:02.167 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:02.167 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T05:50:02.167 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : runc-4:1.4.0-2.el9.x86_64                              7/7
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:  nvme-cli-2.16-1.el9.x86_64                 nvmetcli-0.8-3.el9.noarch
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:  python3-configshell-1:1.1.30-1.el9.noarch  python3-kmod-0.9-32.el9.x86_64
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:  python3-pyparsing-2.4.7-9.el9.noarch       python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:  runc-4:1.4.0-2.el9.x86_64
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:02.211 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T05:50:02.217 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : runc-4:1.4.0-2.el9.x86_64                              6/7
2026-03-10T05:50:02.225 INFO:teuthology.orchestra.run.vm06.stdout:  Installing       : nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:02.336 DEBUG:teuthology.parallel:result is None
2026-03-10T05:50:02.715 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: nvme-cli-2.16-1.el9.x86_64                             7/7
2026-03-10T05:50:02.715 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T05:50:02.715 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:50:02.767 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : nvme-cli-2.16-1.el9.x86_64                             1/7
2026-03-10T05:50:02.768 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : nvmetcli-0.8-3.el9.noarch                              2/7
2026-03-10T05:50:02.768 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:02.768 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:02.768 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-pyparsing-2.4.7-9.el9.noarch                   5/7
2026-03-10T05:50:02.768 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : python3-urwid-2.1.2-4.el9.x86_64                       6/7
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : runc-4:1.4.0-2.el9.x86_64                              7/7
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:Installed:
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:  nvme-cli-2.16-1.el9.x86_64                 nvmetcli-0.8-3.el9.noarch
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:  python3-configshell-1:1.1.30-1.el9.noarch  python3-kmod-0.9-32.el9.x86_64
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pyparsing-2.4.7-9.el9.noarch       python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:  runc-4:1.4.0-2.el9.x86_64
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:50:02.866 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T05:50:02.971 DEBUG:teuthology.parallel:result is None
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : nvme-cli-2.16-1.el9.x86_64                             1/7
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : nvmetcli-0.8-3.el9.noarch                              2/7
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : python3-configshell-1:1.1.30-1.el9.noarch              3/7
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : python3-kmod-0.9-32.el9.x86_64                         4/7
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : python3-pyparsing-2.4.7-9.el9.noarch                   5/7
2026-03-10T05:50:03.347 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : python3-urwid-2.1.2-4.el9.x86_64                       6/7
2026-03-10T05:50:03.470 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : runc-4:1.4.0-2.el9.x86_64                              7/7
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:Installed:
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:  nvme-cli-2.16-1.el9.x86_64                 nvmetcli-0.8-3.el9.noarch
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:  python3-configshell-1:1.1.30-1.el9.noarch  python3-kmod-0.9-32.el9.x86_64
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:  python3-pyparsing-2.4.7-9.el9.noarch       python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:  runc-4:1.4.0-2.el9.x86_64
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:50:03.471 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T05:50:03.617 DEBUG:teuthology.parallel:result is None
2026-03-10T05:50:03.617 INFO:teuthology.run_tasks:Running task install...
2026-03-10T05:50:03.619 DEBUG:teuthology.task.install:project ceph
2026-03-10T05:50:03.619 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T05:50:03.619 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T05:50:03.619 INFO:teuthology.task.install:Using flavor: default
2026-03-10T05:50:03.622 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T05:50:03.622 INFO:teuthology.task.install:extra packages: []
2026-03-10T05:50:03.622 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T05:50:03.622 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T05:50:03.623 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T05:50:03.623 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T05:50:03.623 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
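The shaman queries above resolve the build sha1 to a ready chacra repository, which the Pulling lines that follow then use. A hedged sketch of that resolution step: the response schema (a JSON list whose entries carry a `chacra_url` field) is an assumption based on typical shaman output, and the canned sample below is parsed locally instead of hitting the network; the real logic lives in teuthology.packaging, not here.

```shell
#!/usr/bin/env sh
# Extract the chacra repo URL from a canned shaman-style response.
# The JSON shape is an assumption; only the query URL is taken from the log.
sample='[{"ref": "squid", "chacra_url": "https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/"}]'
repo_url=$(printf '%s' "$sample" | sed -n 's/.*"chacra_url": "\([^"]*\)".*/\1/p')
echo "$repo_url"
# The live query, as logged above (network required):
#   curl -s 'https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df'
```

The extracted URL matches the `Pulling from https://3.chacra.ceph.com/...` lines that the install task logs next.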
2026-03-10T05:50:03.623 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:50:04.269 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-10T05:50:04.269 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-10T05:50:04.329 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-10T05:50:04.329 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-10T05:50:04.393 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-10T05:50:04.393 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-10T05:50:04.839 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T05:50:04.840 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:50:04.840 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T05:50:04.867 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, 
ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T05:50:04.867 DEBUG:teuthology.orchestra.run.vm04:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T05:50:04.935 DEBUG:teuthology.orchestra.run.vm04:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T05:50:04.961 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T05:50:04.961 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:50:04.961 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T05:50:04.992 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 
gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T05:50:04.992 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:50:04.992 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T05:50:04.993 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T05:50:04.993 DEBUG:teuthology.orchestra.run.vm06:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T05:50:05.007 DEBUG:teuthology.orchestra.run.vm04:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T05:50:05.021 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, 
librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T05:50:05.021 DEBUG:teuthology.orchestra.run.vm08:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T05:50:05.068 DEBUG:teuthology.orchestra.run.vm06:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T05:50:05.076 INFO:teuthology.orchestra.run.vm04.stdout:check_obsoletes = 1 2026-03-10T05:50:05.078 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all 2026-03-10T05:50:05.087 DEBUG:teuthology.orchestra.run.vm08:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T05:50:05.155 DEBUG:teuthology.orchestra.run.vm06:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T05:50:05.169 DEBUG:teuthology.orchestra.run.vm08:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T05:50:05.188 INFO:teuthology.orchestra.run.vm06.stdout:check_obsoletes = 1 2026-03-10T05:50:05.190 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all 2026-03-10T05:50:05.245 
INFO:teuthology.orchestra.run.vm08.stdout:check_obsoletes = 1 2026-03-10T05:50:05.247 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all 2026-03-10T05:50:05.251 INFO:teuthology.orchestra.run.vm04.stdout:41 files removed 2026-03-10T05:50:05.271 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T05:50:05.409 INFO:teuthology.orchestra.run.vm06.stdout:41 files removed 2026-03-10T05:50:05.436 INFO:teuthology.orchestra.run.vm08.stdout:41 files removed 2026-03-10T05:50:05.455 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T05:50:05.468 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T05:50:06.773 INFO:teuthology.orchestra.run.vm04.stdout:ceph packages for x86_64 63 kB/s | 84 kB 00:01 2026-03-10T05:50:07.101 INFO:teuthology.orchestra.run.vm08.stdout:ceph packages for x86_64 57 
kB/s | 84 kB 00:01 2026-03-10T05:50:07.135 INFO:teuthology.orchestra.run.vm06.stdout:ceph packages for x86_64 57 kB/s | 84 kB 00:01 2026-03-10T05:50:08.064 INFO:teuthology.orchestra.run.vm04.stdout:ceph noarch packages 9.1 kB/s | 12 kB 00:01 2026-03-10T05:50:08.391 INFO:teuthology.orchestra.run.vm08.stdout:ceph noarch packages 9.2 kB/s | 12 kB 00:01 2026-03-10T05:50:08.443 INFO:teuthology.orchestra.run.vm06.stdout:ceph noarch packages 9.1 kB/s | 12 kB 00:01 2026-03-10T05:50:09.198 INFO:teuthology.orchestra.run.vm04.stdout:ceph source packages 1.7 kB/s | 1.9 kB 00:01 2026-03-10T05:50:09.432 INFO:teuthology.orchestra.run.vm08.stdout:ceph source packages 1.9 kB/s | 1.9 kB 00:01 2026-03-10T05:50:09.473 INFO:teuthology.orchestra.run.vm06.stdout:ceph source packages 1.9 kB/s | 1.9 kB 00:01 2026-03-10T05:50:10.146 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - BaseOS 9.6 MB/s | 8.9 MB 00:00 2026-03-10T05:50:10.216 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - BaseOS 12 MB/s | 8.9 MB 00:00 2026-03-10T05:50:10.362 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - BaseOS 10 MB/s | 8.9 MB 00:00 2026-03-10T05:50:12.753 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - AppStream 14 MB/s | 27 MB 00:01 2026-03-10T05:50:13.186 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02 2026-03-10T05:50:14.222 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - AppStream 9.2 MB/s | 27 MB 00:02 2026-03-10T05:50:17.921 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - CRB 8.0 MB/s | 8.0 MB 00:00 2026-03-10T05:50:18.173 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - CRB 2.9 MB/s | 8.0 MB 00:02 2026-03-10T05:50:18.610 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - CRB 3.0 MB/s | 8.0 MB 00:02 2026-03-10T05:50:19.672 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - Extras packages 22 kB/s | 20 kB 00:00 2026-03-10T05:50:19.677 
INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - Extras packages 30 kB/s | 20 kB 00:00 2026-03-10T05:50:19.678 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - Extras packages 86 kB/s | 20 kB 00:00 2026-03-10T05:50:20.646 INFO:teuthology.orchestra.run.vm08.stdout:Extra Packages for Enterprise Linux 23 MB/s | 20 MB 00:00 2026-03-10T05:50:20.958 INFO:teuthology.orchestra.run.vm06.stdout:Extra Packages for Enterprise Linux 17 MB/s | 20 MB 00:01 2026-03-10T05:50:25.219 INFO:teuthology.orchestra.run.vm08.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-10T05:50:25.642 INFO:teuthology.orchestra.run.vm06.stdout:lab-extras 63 kB/s | 50 kB 00:00 2026-03-10T05:50:26.378 INFO:teuthology.orchestra.run.vm04.stdout:Extra Packages for Enterprise Linux 3.1 MB/s | 20 MB 00:06 2026-03-10T05:50:26.581 INFO:teuthology.orchestra.run.vm08.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T05:50:26.581 INFO:teuthology.orchestra.run.vm08.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T05:50:26.585 INFO:teuthology.orchestra.run.vm08.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T05:50:26.586 INFO:teuthology.orchestra.run.vm08.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T05:50:26.613 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout:Installing: 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T05:50:26.617 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 
2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout:Upgrading: 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T05:50:26.618 
INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies: 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data 
noarch 1.46.7-10.el9 epel 19 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T05:50:26.618 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T05:50:26.619 
INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 
2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T05:50:26.619 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 
1.3.1-1.el9 appstream 66 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Installing weak dependencies: 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:====================================================================================== 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Install 134 Packages 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Upgrade 2 Packages 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 210 M 2026-03-10T05:50:26.620 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages: 2026-03-10T05:50:27.019 INFO:teuthology.orchestra.run.vm06.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T05:50:27.020 INFO:teuthology.orchestra.run.vm06.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T05:50:27.025 INFO:teuthology.orchestra.run.vm06.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 
2026-03-10T05:50:27.025 INFO:teuthology.orchestra.run.vm06.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T05:50:27.053 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout:======================================================================================
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout:======================================================================================
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout:Installing:
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T05:50:27.057 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout:Upgrading:
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout:Installing dependencies:
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T05:50:27.058 INFO:teuthology.orchestra.run.vm06.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T05:50:27.059 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Installing weak dependencies:
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:======================================================================================
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Install 134 Packages
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Upgrade 2 Packages
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Total download size: 210 M
2026-03-10T05:50:27.060 INFO:teuthology.orchestra.run.vm06.stdout:Downloading Packages:
2026-03-10T05:50:28.294 INFO:teuthology.orchestra.run.vm08.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T05:50:28.295 INFO:teuthology.orchestra.run.vm06.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-10T05:50:29.196 INFO:teuthology.orchestra.run.vm06.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.3 MB/s | 1.2 MB 00:00
2026-03-10T05:50:29.225 INFO:teuthology.orchestra.run.vm08.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.2 MB/s | 1.2 MB 00:00
2026-03-10T05:50:29.312 INFO:teuthology.orchestra.run.vm06.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00
2026-03-10T05:50:29.343 INFO:teuthology.orchestra.run.vm08.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00
2026-03-10T05:50:30.221 INFO:teuthology.orchestra.run.vm06.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.7 MB/s | 2.4 MB 00:00
2026-03-10T05:50:30.277 INFO:teuthology.orchestra.run.vm08.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.6 MB/s | 2.4 MB 00:00
2026-03-10T05:50:30.554 INFO:teuthology.orchestra.run.vm06.stdout:(5/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.0 MB/s | 5.5 MB 00:02
2026-03-10T05:50:30.565 INFO:teuthology.orchestra.run.vm06.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 3.1 MB/s | 1.1 MB 00:00
2026-03-10T05:50:30.578 INFO:teuthology.orchestra.run.vm08.stdout:(5/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.0 MB/s | 5.5 MB 00:02
2026-03-10T05:50:30.631 INFO:teuthology.orchestra.run.vm08.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 3.0 MB/s | 1.1 MB 00:00
2026-03-10T05:50:30.785 INFO:teuthology.orchestra.run.vm04.stdout:lab-extras 65 kB/s | 50 kB 00:00
2026-03-10T05:50:31.517 INFO:teuthology.orchestra.run.vm06.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.9 MB/s | 4.7 MB 00:00
2026-03-10T05:50:31.550 INFO:teuthology.orchestra.run.vm08.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.9 MB/s | 4.7 MB 00:00
2026-03-10T05:50:32.146 INFO:teuthology.orchestra.run.vm04.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T05:50:32.146 INFO:teuthology.orchestra.run.vm04.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T05:50:32.150 INFO:teuthology.orchestra.run.vm04.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T05:50:32.150 INFO:teuthology.orchestra.run.vm04.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T05:50:32.178 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T05:50:32.182 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-10T05:50:32.182 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T05:50:32.182 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-10T05:50:32.182 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-10T05:50:32.182 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout:Upgrading:
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T05:50:32.183 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T05:50:32.184 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Installing weak dependencies:
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Install 134 Packages
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Upgrade 2 Packages
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 210 M
2026-03-10T05:50:32.185 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-10T05:50:32.755 INFO:teuthology.orchestra.run.vm06.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 4.4 MB/s | 22 MB 00:04
2026-03-10T05:50:32.887 INFO:teuthology.orchestra.run.vm06.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 190 kB/s | 25 kB 00:00
2026-03-10T05:50:32.957 INFO:teuthology.orchestra.run.vm06.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 7.5 MB/s | 11 MB 00:01
2026-03-10T05:50:33.026 INFO:teuthology.orchestra.run.vm08.stdout:(8/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 7.3 MB/s | 11 MB 00:01
2026-03-10T05:50:33.107 INFO:teuthology.orchestra.run.vm06.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 224 kB/s | 34 kB 00:00
2026-03-10T05:50:33.107 INFO:teuthology.orchestra.run.vm08.stdout:(9/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 4.1 MB/s | 22 MB 00:05
2026-03-10T05:50:33.158 INFO:teuthology.orchestra.run.vm08.stdout:(10/136): ceph-selinux-19.2.3-678.ge911bdeb.el9 190 kB/s | 25 kB 00:00
2026-03-10T05:50:33.158 INFO:teuthology.orchestra.run.vm06.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 6.6 MB/s | 17 MB 00:02
2026-03-10T05:50:33.241 INFO:teuthology.orchestra.run.vm06.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.3 MB/s | 1.0 MB 00:00
2026-03-10T05:50:33.274 INFO:teuthology.orchestra.run.vm08.stdout:(11/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 6.5 MB/s | 17 MB 00:02
2026-03-10T05:50:33.278 INFO:teuthology.orchestra.run.vm06.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T05:50:33.285 INFO:teuthology.orchestra.run.vm08.stdout:(12/136): libcephfs-devel-19.2.3-678.ge911bdeb.
265 kB/s | 34 kB 00:00 2026-03-10T05:50:33.362 INFO:teuthology.orchestra.run.vm06.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T05:50:33.405 INFO:teuthology.orchestra.run.vm08.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.5 MB/s | 1.0 MB 00:00 2026-03-10T05:50:33.424 INFO:teuthology.orchestra.run.vm06.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.4 MB/s | 503 kB 00:00 2026-03-10T05:50:33.432 INFO:teuthology.orchestra.run.vm08.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.1 MB/s | 163 kB 00:00 2026-03-10T05:50:33.523 INFO:teuthology.orchestra.run.vm08.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T05:50:33.538 INFO:teuthology.orchestra.run.vm06.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 398 kB/s | 45 kB 00:00 2026-03-10T05:50:33.558 INFO:teuthology.orchestra.run.vm08.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.9 MB/s | 503 kB 00:00 2026-03-10T05:50:33.668 INFO:teuthology.orchestra.run.vm06.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00 2026-03-10T05:50:33.681 INFO:teuthology.orchestra.run.vm08.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 365 kB/s | 45 kB 00:00 2026-03-10T05:50:33.786 INFO:teuthology.orchestra.run.vm06.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-10T05:50:33.872 INFO:teuthology.orchestra.run.vm06.stdout:(20/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00 2026-03-10T05:50:33.906 INFO:teuthology.orchestra.run.vm06.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00 2026-03-10T05:50:33.998 INFO:teuthology.orchestra.run.vm06.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-10T05:50:34.022 INFO:teuthology.orchestra.run.vm06.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 
863 kB/s | 100 kB 00:00 2026-03-10T05:50:34.089 INFO:teuthology.orchestra.run.vm04.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-10T05:50:34.118 INFO:teuthology.orchestra.run.vm06.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 715 kB/s | 85 kB 00:00 2026-03-10T05:50:34.134 INFO:teuthology.orchestra.run.vm08.stdout:(18/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 8.9 MB/s | 5.4 MB 00:00 2026-03-10T05:50:34.238 INFO:teuthology.orchestra.run.vm06.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-10T05:50:34.262 INFO:teuthology.orchestra.run.vm08.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00 2026-03-10T05:50:34.329 INFO:teuthology.orchestra.run.vm08.stdout:(20/136): python3-ceph-common-19.2.3-678.ge911b 219 kB/s | 142 kB 00:00 2026-03-10T05:50:34.358 INFO:teuthology.orchestra.run.vm06.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 260 kB/s | 31 kB 00:00 2026-03-10T05:50:34.392 INFO:teuthology.orchestra.run.vm08.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.4 MB/s | 323 kB 00:00 2026-03-10T05:50:34.392 INFO:teuthology.orchestra.run.vm06.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.4 MB/s | 3.1 MB 00:00 2026-03-10T05:50:34.452 INFO:teuthology.orchestra.run.vm08.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-10T05:50:34.478 INFO:teuthology.orchestra.run.vm06.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T05:50:34.515 INFO:teuthology.orchestra.run.vm08.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 
813 kB/s | 100 kB 00:00 2026-03-10T05:50:34.573 INFO:teuthology.orchestra.run.vm08.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 703 kB/s | 85 kB 00:00 2026-03-10T05:50:34.708 INFO:teuthology.orchestra.run.vm08.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00 2026-03-10T05:50:34.784 INFO:teuthology.orchestra.run.vm06.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 9.7 MB/s | 3.8 MB 00:00 2026-03-10T05:50:34.829 INFO:teuthology.orchestra.run.vm08.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 259 kB/s | 31 kB 00:00 2026-03-10T05:50:34.901 INFO:teuthology.orchestra.run.vm06.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.1 MB/s | 253 kB 00:00 2026-03-10T05:50:34.917 INFO:teuthology.orchestra.run.vm08.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 7.7 MB/s | 3.1 MB 00:00 2026-03-10T05:50:34.951 INFO:teuthology.orchestra.run.vm08.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T05:50:35.014 INFO:teuthology.orchestra.run.vm06.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 435 kB/s | 49 kB 00:00 2026-03-10T05:50:35.115 INFO:teuthology.orchestra.run.vm06.stdout:(32/136): ceph-mgr-diskprediction-local-19.2.3- 12 MB/s | 7.4 MB 00:00 2026-03-10T05:50:35.127 INFO:teuthology.orchestra.run.vm06.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 148 kB/s | 17 kB 00:00 2026-03-10T05:50:35.168 INFO:teuthology.orchestra.run.vm04.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.1 MB/s | 1.2 MB 00:01 2026-03-10T05:50:35.237 INFO:teuthology.orchestra.run.vm06.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.4 MB/s | 299 kB 00:00 2026-03-10T05:50:35.253 INFO:teuthology.orchestra.run.vm06.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 6.0 MB/s | 769 kB 00:00 2026-03-10T05:50:35.293 INFO:teuthology.orchestra.run.vm04.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.1 MB/s | 145 kB 00:00 2026-03-10T05:50:35.432 INFO:teuthology.orchestra.run.vm08.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 7.4 MB/s | 3.8 MB 00:00 2026-03-10T05:50:35.489 INFO:teuthology.orchestra.run.vm06.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 171 kB/s | 40 kB 00:00 2026-03-10T05:50:35.552 INFO:teuthology.orchestra.run.vm08.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.1 MB/s | 253 kB 00:00 2026-03-10T05:50:35.640 INFO:teuthology.orchestra.run.vm06.stdout:(37/136): libconfig-1.7.2-9.el9.x86_64.rpm 477 kB/s | 72 kB 00:00 2026-03-10T05:50:35.670 INFO:teuthology.orchestra.run.vm08.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 420 kB/s | 49 kB 00:00 2026-03-10T05:50:35.689 INFO:teuthology.orchestra.run.vm06.stdout:(38/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 776 kB/s | 351 kB 00:00 2026-03-10T05:50:35.773 INFO:teuthology.orchestra.run.vm06.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 2.1 MB/s | 184 kB 00:00 2026-03-10T05:50:35.787 INFO:teuthology.orchestra.run.vm08.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 143 kB/s | 17 kB 00:00 2026-03-10T05:50:35.844 INFO:teuthology.orchestra.run.vm06.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 470 kB/s | 33 kB 00:00 2026-03-10T05:50:35.908 INFO:teuthology.orchestra.run.vm08.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.4 MB/s | 299 kB 00:00 2026-03-10T05:50:35.924 INFO:teuthology.orchestra.run.vm06.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 2.7 MB/s | 794 kB 00:00 2026-03-10T05:50:35.924 INFO:teuthology.orchestra.run.vm06.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.1 MB/s | 93 kB 00:00 2026-03-10T05:50:35.947 INFO:teuthology.orchestra.run.vm08.stdout:(34/136): ceph-mgr-diskprediction-local-19.2.3- 7.4 MB/s | 7.4 MB 00:00 2026-03-10T05:50:36.017 INFO:teuthology.orchestra.run.vm06.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.7 MB/s | 253 kB 00:00 2026-03-10T05:50:36.079 INFO:teuthology.orchestra.run.vm06.stdout:(44/136): python3-cryptography-36.0.1-5.el9.x86 8.1 MB/s | 1.2 MB 00:00 2026-03-10T05:50:36.092 INFO:teuthology.orchestra.run.vm06.stdout:(45/136): python3-ply-3.11-14.el9.noarch.rpm 1.4 MB/s | 106 kB 00:00 2026-03-10T05:50:36.150 INFO:teuthology.orchestra.run.vm06.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 1.9 MB/s | 135 kB 00:00 2026-03-10T05:50:36.168 INFO:teuthology.orchestra.run.vm06.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 1.6 MB/s | 126 kB 00:00 2026-03-10T05:50:36.223 INFO:teuthology.orchestra.run.vm06.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.0 MB/s | 218 kB 00:00 2026-03-10T05:50:36.229 INFO:teuthology.orchestra.run.vm04.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 2.6 MB/s | 2.4 MB 00:00 2026-03-10T05:50:36.247 INFO:teuthology.orchestra.run.vm06.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 2.2 MB/s | 182 kB 00:00 2026-03-10T05:50:36.268 INFO:teuthology.orchestra.run.vm08.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 2.1 MB/s | 769 kB 00:00 2026-03-10T05:50:36.296 INFO:teuthology.orchestra.run.vm06.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.6 MB/s | 266 kB 00:00 2026-03-10T05:50:36.438 INFO:teuthology.orchestra.run.vm06.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 209 kB/s | 30 kB 00:00 2026-03-10T05:50:36.459 INFO:teuthology.orchestra.run.vm08.stdout:(36/136): 
cryptsetup-2.8.1-3.el9.x86_64.rpm 685 kB/s | 351 kB 00:00 2026-03-10T05:50:36.460 INFO:teuthology.orchestra.run.vm08.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 210 kB/s | 40 kB 00:00 2026-03-10T05:50:36.482 INFO:teuthology.orchestra.run.vm06.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 443 kB/s | 104 kB 00:00 2026-03-10T05:50:36.507 INFO:teuthology.orchestra.run.vm08.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 1.5 MB/s | 72 kB 00:00 2026-03-10T05:50:36.514 INFO:teuthology.orchestra.run.vm04.stdout:(5/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 1.9 MB/s | 5.5 MB 00:02 2026-03-10T05:50:36.532 INFO:teuthology.orchestra.run.vm06.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 303 kB/s | 15 kB 00:00 2026-03-10T05:50:36.576 INFO:teuthology.orchestra.run.vm08.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 6.7 MB/s | 794 kB 00:00 2026-03-10T05:50:36.577 INFO:teuthology.orchestra.run.vm08.stdout:(40/136): libquadmath-11.5.0-14.el9.x86_64.rpm 2.6 MB/s | 184 kB 00:00 2026-03-10T05:50:36.584 INFO:teuthology.orchestra.run.vm04.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 3.0 MB/s | 1.1 MB 00:00 2026-03-10T05:50:36.622 INFO:teuthology.orchestra.run.vm08.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 716 kB/s | 33 kB 00:00 2026-03-10T05:50:36.624 INFO:teuthology.orchestra.run.vm08.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.9 MB/s | 93 kB 00:00 2026-03-10T05:50:36.628 INFO:teuthology.orchestra.run.vm06.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.7 MB/s | 164 kB 00:00 2026-03-10T05:50:36.672 INFO:teuthology.orchestra.run.vm08.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.1 MB/s | 253 kB 00:00 2026-03-10T05:50:36.686 INFO:teuthology.orchestra.run.vm06.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.7 MB/s | 160 kB 00:00 2026-03-10T05:50:36.734 INFO:teuthology.orchestra.run.vm08.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.7 MB/s | 106 kB 00:00 2026-03-10T05:50:36.736 
INFO:teuthology.orchestra.run.vm06.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 913 kB/s | 45 kB 00:00 2026-03-10T05:50:36.763 INFO:teuthology.orchestra.run.vm08.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 9.0 MB/s | 1.2 MB 00:00 2026-03-10T05:50:36.782 INFO:teuthology.orchestra.run.vm08.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 2.8 MB/s | 135 kB 00:00 2026-03-10T05:50:36.811 INFO:teuthology.orchestra.run.vm08.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 2.6 MB/s | 126 kB 00:00 2026-03-10T05:50:36.835 INFO:teuthology.orchestra.run.vm08.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 4.0 MB/s | 218 kB 00:00 2026-03-10T05:50:36.839 INFO:teuthology.orchestra.run.vm06.stdout:(57/136): librdkafka-1.6.1-102.el9.x86_64.rpm 6.3 MB/s | 662 kB 00:00 2026-03-10T05:50:36.860 INFO:teuthology.orchestra.run.vm08.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 3.6 MB/s | 182 kB 00:00 2026-03-10T05:50:36.885 INFO:teuthology.orchestra.run.vm08.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 5.3 MB/s | 266 kB 00:00 2026-03-10T05:50:36.891 INFO:teuthology.orchestra.run.vm06.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.7 MB/s | 246 kB 00:00 2026-03-10T05:50:36.948 INFO:teuthology.orchestra.run.vm06.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 4.0 MB/s | 233 kB 00:00 2026-03-10T05:50:37.001 INFO:teuthology.orchestra.run.vm06.stdout:(60/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 5.4 MB/s | 292 kB 00:00 2026-03-10T05:50:37.054 INFO:teuthology.orchestra.run.vm06.stdout:(61/136): lua-5.4.4-4.el9.x86_64.rpm 3.5 MB/s | 188 kB 00:00 2026-03-10T05:50:37.076 INFO:teuthology.orchestra.run.vm08.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 155 kB/s | 30 kB 00:00 2026-03-10T05:50:37.104 INFO:teuthology.orchestra.run.vm06.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 850 kB/s | 42 kB 00:00 2026-03-10T05:50:37.179 INFO:teuthology.orchestra.run.vm06.stdout:(63/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 4.0 MB/s | 3.0 MB 00:00 
2026-03-10T05:50:37.181 INFO:teuthology.orchestra.run.vm08.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 325 kB/s | 104 kB 00:00 2026-03-10T05:50:37.246 INFO:teuthology.orchestra.run.vm08.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 230 kB/s | 15 kB 00:00 2026-03-10T05:50:37.357 INFO:teuthology.orchestra.run.vm06.stdout:(64/136): openblas-openmp-0.3.29-1.el9.x86_64.r 21 MB/s | 5.3 MB 00:00 2026-03-10T05:50:37.378 INFO:teuthology.orchestra.run.vm08.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.2 MB/s | 164 kB 00:00 2026-03-10T05:50:37.448 INFO:teuthology.orchestra.run.vm08.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.2 MB/s | 160 kB 00:00 2026-03-10T05:50:37.514 INFO:teuthology.orchestra.run.vm08.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 685 kB/s | 45 kB 00:00 2026-03-10T05:50:37.582 INFO:teuthology.orchestra.run.vm06.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 2.5 MB/s | 1.0 MB 00:00 2026-03-10T05:50:37.681 INFO:teuthology.orchestra.run.vm06.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 2.4 MB/s | 244 kB 00:00 2026-03-10T05:50:37.682 INFO:teuthology.orchestra.run.vm08.stdout:(57/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 11 MB/s | 50 MB 00:04 2026-03-10T05:50:37.685 INFO:teuthology.orchestra.run.vm08.stdout:(58/136): librdkafka-1.6.1-102.el9.x86_64.rpm 3.8 MB/s | 662 kB 00:00 2026-03-10T05:50:37.735 INFO:teuthology.orchestra.run.vm06.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 4.5 MB/s | 249 kB 00:00 2026-03-10T05:50:37.756 INFO:teuthology.orchestra.run.vm08.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 3.3 MB/s | 233 kB 00:00 2026-03-10T05:50:37.786 INFO:teuthology.orchestra.run.vm06.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 932 kB/s | 48 kB 00:00 2026-03-10T05:50:37.827 INFO:teuthology.orchestra.run.vm08.stdout:(60/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.0 MB/s | 292 kB 00:00 2026-03-10T05:50:37.837 INFO:teuthology.orchestra.run.vm06.stdout:(69/136): 
python3-libstoragemgmt-1.10.1-1.el9.x 3.4 MB/s | 177 kB 00:00 2026-03-10T05:50:37.882 INFO:teuthology.orchestra.run.vm06.stdout:(70/136): python3-babel-2.9.1-2.el9.noarch.rpm 11 MB/s | 6.0 MB 00:00 2026-03-10T05:50:37.890 INFO:teuthology.orchestra.run.vm06.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 3.2 MB/s | 172 kB 00:00 2026-03-10T05:50:37.895 INFO:teuthology.orchestra.run.vm08.stdout:(61/136): lua-5.4.4-4.el9.x86_64.rpm 2.7 MB/s | 188 kB 00:00 2026-03-10T05:50:37.933 INFO:teuthology.orchestra.run.vm06.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 698 kB/s | 35 kB 00:00 2026-03-10T05:50:37.963 INFO:teuthology.orchestra.run.vm08.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 630 kB/s | 42 kB 00:00 2026-03-10T05:50:37.989 INFO:teuthology.orchestra.run.vm06.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 7.6 MB/s | 442 kB 00:00 2026-03-10T05:50:38.043 INFO:teuthology.orchestra.run.vm06.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.4 MB/s | 77 kB 00:00 2026-03-10T05:50:38.077 INFO:teuthology.orchestra.run.vm08.stdout:(63/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 624 kB/s | 246 kB 00:00 2026-03-10T05:50:38.098 INFO:teuthology.orchestra.run.vm06.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 4.8 MB/s | 267 kB 00:00 2026-03-10T05:50:38.150 INFO:teuthology.orchestra.run.vm06.stdout:(76/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 3.0 MB/s | 157 kB 00:00 2026-03-10T05:50:38.202 INFO:teuthology.orchestra.run.vm06.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 5.2 MB/s | 277 kB 00:00 2026-03-10T05:50:38.245 INFO:teuthology.orchestra.run.vm06.stdout:(78/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 17 MB/s | 6.1 MB 00:00 2026-03-10T05:50:38.253 INFO:teuthology.orchestra.run.vm06.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 1.0 MB/s | 54 kB 00:00 2026-03-10T05:50:38.275 INFO:teuthology.orchestra.run.vm08.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 5.1 MB/s | 1.0 MB 00:00 2026-03-10T05:50:38.308 
INFO:teuthology.orchestra.run.vm06.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 764 kB/s | 42 kB 00:00 2026-03-10T05:50:38.365 INFO:teuthology.orchestra.run.vm06.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 4.2 MB/s | 240 kB 00:00 2026-03-10T05:50:38.415 INFO:teuthology.orchestra.run.vm06.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 742 kB/s | 37 kB 00:00 2026-03-10T05:50:38.458 INFO:teuthology.orchestra.run.vm08.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 11 MB/s | 5.3 MB 00:00 2026-03-10T05:50:38.465 INFO:teuthology.orchestra.run.vm06.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.3 MB/s | 66 kB 00:00 2026-03-10T05:50:38.518 INFO:teuthology.orchestra.run.vm06.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 5.6 MB/s | 303 kB 00:00 2026-03-10T05:50:38.529 INFO:teuthology.orchestra.run.vm08.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 3.4 MB/s | 244 kB 00:00 2026-03-10T05:50:38.567 INFO:teuthology.orchestra.run.vm04.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 2.3 MB/s | 4.7 MB 00:02 2026-03-10T05:50:38.568 INFO:teuthology.orchestra.run.vm06.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.3 MB/s | 64 kB 00:00 2026-03-10T05:50:38.601 INFO:teuthology.orchestra.run.vm08.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 3.4 MB/s | 249 kB 00:00 2026-03-10T05:50:38.661 INFO:teuthology.orchestra.run.vm06.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 241 kB/s | 22 kB 00:00 2026-03-10T05:50:38.667 INFO:teuthology.orchestra.run.vm08.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 717 kB/s | 48 kB 00:00 2026-03-10T05:50:38.766 INFO:teuthology.orchestra.run.vm08.stdout:(69/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.8 MB/s | 177 kB 00:00 2026-03-10T05:50:38.766 INFO:teuthology.orchestra.run.vm04.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 4.2 MB/s | 22 MB 00:05 2026-03-10T05:50:38.852 INFO:teuthology.orchestra.run.vm08.stdout:(70/136): python3-babel-2.9.1-2.el9.noarch.rpm 10 MB/s | 
6.0 MB 00:00 2026-03-10T05:50:38.852 INFO:teuthology.orchestra.run.vm06.stdout:(87/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 8.3 MB/s | 50 MB 00:05 2026-03-10T05:50:38.853 INFO:teuthology.orchestra.run.vm08.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 1.9 MB/s | 172 kB 00:00 2026-03-10T05:50:38.882 INFO:teuthology.orchestra.run.vm04.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 217 kB/s | 25 kB 00:00 2026-03-10T05:50:38.918 INFO:teuthology.orchestra.run.vm08.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 523 kB/s | 35 kB 00:00 2026-03-10T05:50:38.947 INFO:teuthology.orchestra.run.vm06.stdout:(88/136): protobuf-compiler-3.14.0-17.el9.x86_6 2.9 MB/s | 862 kB 00:00 2026-03-10T05:50:39.050 INFO:teuthology.orchestra.run.vm08.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.3 MB/s | 442 kB 00:00 2026-03-10T05:50:39.118 INFO:teuthology.orchestra.run.vm08.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.1 MB/s | 77 kB 00:00 2026-03-10T05:50:39.231 INFO:teuthology.orchestra.run.vm06.stdout:(89/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.4 MB/s | 551 kB 00:00 2026-03-10T05:50:39.263 INFO:teuthology.orchestra.run.vm06.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 605 kB/s | 19 kB 00:00 2026-03-10T05:50:39.327 INFO:teuthology.orchestra.run.vm08.stdout:(75/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 13 MB/s | 6.1 MB 00:00 2026-03-10T05:50:39.367 INFO:teuthology.orchestra.run.vm08.stdout:(76/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 1.3 MB/s | 3.0 MB 00:02 2026-03-10T05:50:39.395 INFO:teuthology.orchestra.run.vm08.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.3 MB/s | 157 kB 00:00 2026-03-10T05:50:39.465 INFO:teuthology.orchestra.run.vm08.stdout:(78/136): python3-requests-oauthlib-1.3.0-12.el 773 kB/s | 54 kB 00:00 2026-03-10T05:50:39.469 INFO:teuthology.orchestra.run.vm06.stdout:(91/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 589 kB/s | 308 kB 00:00 2026-03-10T05:50:39.522 
INFO:teuthology.orchestra.run.vm06.stdout:(92/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 15 MB/s | 19 MB 00:01 2026-03-10T05:50:39.564 INFO:teuthology.orchestra.run.vm08.stdout:(79/136): python3-pyasn1-modules-0.4.8-7.el9.no 1.4 MB/s | 277 kB 00:00 2026-03-10T05:50:39.583 INFO:teuthology.orchestra.run.vm06.stdout:(93/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 220 kB/s | 25 kB 00:00 2026-03-10T05:50:39.630 INFO:teuthology.orchestra.run.vm08.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 628 kB/s | 42 kB 00:00 2026-03-10T05:50:39.643 INFO:teuthology.orchestra.run.vm06.stdout:(94/136): liboath-2.6.12-1.el9.x86_64.rpm 404 kB/s | 49 kB 00:00 2026-03-10T05:50:39.706 INFO:teuthology.orchestra.run.vm06.stdout:(95/136): libunwind-1.6.2-1.el9.x86_64.rpm 546 kB/s | 67 kB 00:00 2026-03-10T05:50:39.732 INFO:teuthology.orchestra.run.vm08.stdout:(81/136): python3-protobuf-3.14.0-17.el9.noarch 435 kB/s | 267 kB 00:00 2026-03-10T05:50:39.798 INFO:teuthology.orchestra.run.vm08.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 569 kB/s | 37 kB 00:00 2026-03-10T05:50:39.800 INFO:teuthology.orchestra.run.vm06.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 970 kB/s | 151 kB 00:00 2026-03-10T05:50:39.831 INFO:teuthology.orchestra.run.vm08.stdout:(83/136): qatlib-25.08.0-2.el9.x86_64.rpm 1.2 MB/s | 240 kB 00:00 2026-03-10T05:50:39.865 INFO:teuthology.orchestra.run.vm08.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 997 kB/s | 66 kB 00:00 2026-03-10T05:50:39.932 INFO:teuthology.orchestra.run.vm08.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 950 kB/s | 64 kB 00:00 2026-03-10T05:50:40.018 INFO:teuthology.orchestra.run.vm08.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 260 kB/s | 22 kB 00:00 2026-03-10T05:50:40.027 INFO:teuthology.orchestra.run.vm08.stdout:(87/136): socat-1.7.4.1-8.el9.x86_64.rpm 1.5 MB/s | 303 kB 00:00 2026-03-10T05:50:40.164 INFO:teuthology.orchestra.run.vm08.stdout:(88/136): protobuf-compiler-3.14.0-17.el9.x86_6 5.8 MB/s | 862 kB 00:00 
2026-03-10T05:50:40.325 INFO:teuthology.orchestra.run.vm06.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 1.0 MB/s | 548 kB 00:00 2026-03-10T05:50:40.328 INFO:teuthology.orchestra.run.vm08.stdout:(89/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.8 MB/s | 551 kB 00:00 2026-03-10T05:50:40.347 INFO:teuthology.orchestra.run.vm08.stdout:(90/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 1.7 MB/s | 308 kB 00:00 2026-03-10T05:50:40.360 INFO:teuthology.orchestra.run.vm06.stdout:(98/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 1.3 MB/s | 838 kB 00:00 2026-03-10T05:50:40.366 INFO:teuthology.orchestra.run.vm08.stdout:(91/136): grpc-data-1.46.7-10.el9.noarch.rpm 517 kB/s | 19 kB 00:00 2026-03-10T05:50:40.441 INFO:teuthology.orchestra.run.vm08.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 329 kB/s | 25 kB 00:00 2026-03-10T05:50:40.544 INFO:teuthology.orchestra.run.vm08.stdout:(93/136): libarrow-9.0.0-15.el9.x86_64.rpm 22 MB/s | 4.4 MB 00:00 2026-03-10T05:50:40.545 INFO:teuthology.orchestra.run.vm06.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 134 kB/s | 29 kB 00:00 2026-03-10T05:50:40.563 INFO:teuthology.orchestra.run.vm08.stdout:(94/136): liboath-2.6.12-1.el9.x86_64.rpm 402 kB/s | 49 kB 00:00 2026-03-10T05:50:40.582 INFO:teuthology.orchestra.run.vm08.stdout:(95/136): libunwind-1.6.2-1.el9.x86_64.rpm 1.7 MB/s | 67 kB 00:00 2026-03-10T05:50:40.604 INFO:teuthology.orchestra.run.vm08.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 3.6 MB/s | 151 kB 00:00 2026-03-10T05:50:40.647 INFO:teuthology.orchestra.run.vm08.stdout:(97/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 13 MB/s | 838 kB 00:00 2026-03-10T05:50:40.662 INFO:teuthology.orchestra.run.vm08.stdout:(98/136): python3-asyncssh-2.13.2-5.el9.noarch. 
9.3 MB/s | 548 kB 00:00 2026-03-10T05:50:40.670 INFO:teuthology.orchestra.run.vm06.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 194 kB/s | 60 kB 00:00 2026-03-10T05:50:40.683 INFO:teuthology.orchestra.run.vm08.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 818 kB/s | 29 kB 00:00 2026-03-10T05:50:40.702 INFO:teuthology.orchestra.run.vm08.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 1.5 MB/s | 60 kB 00:00 2026-03-10T05:50:40.723 INFO:teuthology.orchestra.run.vm08.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.1 MB/s | 43 kB 00:00 2026-03-10T05:50:40.740 INFO:teuthology.orchestra.run.vm08.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 842 kB/s | 32 kB 00:00 2026-03-10T05:50:40.761 INFO:teuthology.orchestra.run.vm08.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 371 kB/s | 14 kB 00:00 2026-03-10T05:50:40.785 INFO:teuthology.orchestra.run.vm08.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 3.8 MB/s | 173 kB 00:00 2026-03-10T05:50:40.810 INFO:teuthology.orchestra.run.vm08.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 7.2 MB/s | 358 kB 00:00 2026-03-10T05:50:40.832 INFO:teuthology.orchestra.run.vm08.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 5.3 MB/s | 254 kB 00:00 2026-03-10T05:50:40.855 INFO:teuthology.orchestra.run.vm06.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 141 kB/s | 43 kB 00:00 2026-03-10T05:50:40.924 INFO:teuthology.orchestra.run.vm08.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 1.5 MB/s | 144 kB 00:00 2026-03-10T05:50:40.932 INFO:teuthology.orchestra.run.vm08.stdout:(108/136): python3-grpcio-1.46.7-10.el9.x86_64. 
17 MB/s | 2.0 MB 00:00
2026-03-10T05:50:40.966 INFO:teuthology.orchestra.run.vm08.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 261 kB/s | 11 kB 00:00
2026-03-10T05:50:40.969 INFO:teuthology.orchestra.run.vm08.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 486 kB/s | 18 kB 00:00
2026-03-10T05:50:40.980 INFO:teuthology.orchestra.run.vm06.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 104 kB/s | 32 kB 00:00
2026-03-10T05:50:41.005 INFO:teuthology.orchestra.run.vm08.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 589 kB/s | 23 kB 00:00
2026-03-10T05:50:41.006 INFO:teuthology.orchestra.run.vm08.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 533 kB/s | 20 kB 00:00
2026-03-10T05:50:41.043 INFO:teuthology.orchestra.run.vm08.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 517 kB/s | 19 kB 00:00
2026-03-10T05:50:41.048 INFO:teuthology.orchestra.run.vm08.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 626 kB/s | 26 kB 00:00
2026-03-10T05:50:41.091 INFO:teuthology.orchestra.run.vm08.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 1.1 MB/s | 46 kB 00:00
2026-03-10T05:50:41.106 INFO:teuthology.orchestra.run.vm06.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 56 kB/s | 14 kB 00:00
2026-03-10T05:50:41.108 INFO:teuthology.orchestra.run.vm08.stdout:(116/136): python3-kubernetes-26.1.0-3.el9.noar 16 MB/s | 1.0 MB 00:00
2026-03-10T05:50:41.172 INFO:teuthology.orchestra.run.vm08.stdout:(117/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 11 MB/s | 19 MB 00:01
2026-03-10T05:50:41.173 INFO:teuthology.orchestra.run.vm08.stdout:(118/136): python3-more-itertools-8.12.0-2.el9. 955 kB/s | 79 kB 00:00
2026-03-10T05:50:41.174 INFO:teuthology.orchestra.run.vm08.stdout:(119/136): python3-natsort-7.1.1-5.el9.noarch.r 876 kB/s | 58 kB 00:00
2026-03-10T05:50:41.215 INFO:teuthology.orchestra.run.vm08.stdout:(120/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 2.2 MB/s | 90 kB 00:00
2026-03-10T05:50:41.215 INFO:teuthology.orchestra.run.vm08.stdout:(121/136): python3-portend-3.1.0-2.el9.noarch.r 398 kB/s | 16 kB 00:00
2026-03-10T05:50:41.222 INFO:teuthology.orchestra.run.vm08.stdout:(122/136): python3-pecan-1.4.2-3.el9.noarch.rpm 5.4 MB/s | 272 kB 00:00
2026-03-10T05:50:41.256 INFO:teuthology.orchestra.run.vm08.stdout:(123/136): python3-repoze-lru-0.7-16.el9.noarch 754 kB/s | 31 kB 00:00
2026-03-10T05:50:41.257 INFO:teuthology.orchestra.run.vm08.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 4.4 MB/s | 188 kB 00:00
2026-03-10T05:50:41.261 INFO:teuthology.orchestra.run.vm08.stdout:(125/136): python3-rsa-4.9-2.el9.noarch.rpm 1.5 MB/s | 59 kB 00:00
2026-03-10T05:50:41.290 INFO:teuthology.orchestra.run.vm08.stdout:(126/136): python3-tempora-5.0.0-2.el9.noarch.r 1.0 MB/s | 36 kB 00:00
2026-03-10T05:50:41.298 INFO:teuthology.orchestra.run.vm08.stdout:(127/136): python3-typing-extensions-4.15.0-1.e 2.1 MB/s | 86 kB 00:00
2026-03-10T05:50:41.301 INFO:teuthology.orchestra.run.vm08.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 5.7 MB/s | 230 kB 00:00
2026-03-10T05:50:41.329 INFO:teuthology.orchestra.run.vm08.stdout:(129/136): python3-websocket-client-1.2.3-2.el9 2.3 MB/s | 90 kB 00:00
2026-03-10T05:50:41.342 INFO:teuthology.orchestra.run.vm08.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 546 kB/s | 22 kB 00:00
2026-03-10T05:50:41.349 INFO:teuthology.orchestra.run.vm08.stdout:(131/136): python3-werkzeug-2.0.3-3.el9.1.noarc 8.2 MB/s | 427 kB 00:00
2026-03-10T05:50:41.371 INFO:teuthology.orchestra.run.vm08.stdout:(132/136): python3-zc-lockfile-2.0-10.el9.noarc 485 kB/s | 20 kB 00:00
2026-03-10T05:50:41.384 INFO:teuthology.orchestra.run.vm08.stdout:(133/136): re2-20211101-20.el9.x86_64.rpm 4.5 MB/s | 191 kB 00:00
2026-03-10T05:50:41.415 INFO:teuthology.orchestra.run.vm06.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 398 kB/s | 173 kB 00:00
2026-03-10T05:50:41.423 INFO:teuthology.orchestra.run.vm08.stdout:(134/136): thrift-0.15.0-4.el9.x86_64.rpm 21 MB/s | 1.6 MB 00:00
2026-03-10T05:50:41.817 INFO:teuthology.orchestra.run.vm06.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 505 kB/s | 358 kB 00:00
2026-03-10T05:50:41.878 INFO:teuthology.orchestra.run.vm06.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 549 kB/s | 254 kB 00:00
2026-03-10T05:50:42.248 INFO:teuthology.orchestra.run.vm06.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 393 kB/s | 144 kB 00:00
2026-03-10T05:50:42.258 INFO:teuthology.orchestra.run.vm06.stdout:(108/136): libarrow-9.0.0-15.el9.x86_64.rpm 1.5 MB/s | 4.4 MB 00:02
2026-03-10T05:50:42.403 INFO:teuthology.orchestra.run.vm06.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 68 kB/s | 11 kB 00:00
2026-03-10T05:50:42.442 INFO:teuthology.orchestra.run.vm06.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 96 kB/s | 18 kB 00:00
2026-03-10T05:50:42.442 INFO:teuthology.orchestra.run.vm04.stdout:(10/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 17 MB 00:05
2026-03-10T05:50:42.466 INFO:teuthology.orchestra.run.vm08.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.9 MB/s | 3.2 MB 00:01
2026-03-10T05:50:42.541 INFO:teuthology.orchestra.run.vm08.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.9 MB/s | 3.4 MB 00:01
2026-03-10T05:50:42.543 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:42.543 INFO:teuthology.orchestra.run.vm08.stdout:Total 13 MB/s | 210 MB 00:15
2026-03-10T05:50:42.560 INFO:teuthology.orchestra.run.vm04.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 286 kB/s | 34 kB 00:00
2026-03-10T05:50:42.566 INFO:teuthology.orchestra.run.vm06.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 142 kB/s | 23 kB 00:00
2026-03-10T05:50:42.597 INFO:teuthology.orchestra.run.vm06.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 127 kB/s | 20 kB 00:00
2026-03-10T05:50:42.720 INFO:teuthology.orchestra.run.vm06.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 127 kB/s | 19 kB 00:00
2026-03-10T05:50:42.752 INFO:teuthology.orchestra.run.vm06.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 170 kB/s | 26 kB 00:00
2026-03-10T05:50:42.917 INFO:teuthology.orchestra.run.vm06.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 282 kB/s | 46 kB 00:00
2026-03-10T05:50:43.029 INFO:teuthology.orchestra.run.vm04.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 2.1 MB/s | 1.0 MB 00:00
2026-03-10T05:50:43.128 INFO:teuthology.orchestra.run.vm06.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 374 kB/s | 79 kB 00:00
2026-03-10T05:50:43.137 INFO:teuthology.orchestra.run.vm06.stdout:(117/136): python3-grpcio-1.46.7-10.el9.x86_64. 1.5 MB/s | 2.0 MB 00:01
2026-03-10T05:50:43.146 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T05:50:43.148 INFO:teuthology.orchestra.run.vm04.stdout:(13/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T05:50:43.194 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T05:50:43.194 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T05:50:43.267 INFO:teuthology.orchestra.run.vm04.stdout:(14/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00
2026-03-10T05:50:43.273 INFO:teuthology.orchestra.run.vm06.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 399 kB/s | 58 kB 00:00
2026-03-10T05:50:43.400 INFO:teuthology.orchestra.run.vm06.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 130 kB/s | 16 kB 00:00
2026-03-10T05:50:43.437 INFO:teuthology.orchestra.run.vm06.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 911 kB/s | 272 kB 00:00
2026-03-10T05:50:43.448 INFO:teuthology.orchestra.run.vm06.stdout:(121/136): python3-kubernetes-26.1.0-3.el9.noar 1.4 MB/s | 1.0 MB 00:00
2026-03-10T05:50:43.469 INFO:teuthology.orchestra.run.vm06.stdout:(122/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.3 MB/s | 90 kB 00:00
2026-03-10T05:50:43.489 INFO:teuthology.orchestra.run.vm06.stdout:(123/136): python3-repoze-lru-0.7-16.el9.noarch 591 kB/s | 31 kB 00:00
2026-03-10T05:50:43.503 INFO:teuthology.orchestra.run.vm04.stdout:(15/136): libradosstriper1-19.2.3-678.ge911bdeb 2.1 MB/s | 503 kB 00:00
2026-03-10T05:50:43.539 INFO:teuthology.orchestra.run.vm06.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 2.0 MB/s | 188 kB 00:00
2026-03-10T05:50:43.554 INFO:teuthology.orchestra.run.vm06.stdout:(125/136): python3-rsa-4.9-2.el9.noarch.rpm 694 kB/s | 59 kB 00:00
2026-03-10T05:50:43.564 INFO:teuthology.orchestra.run.vm06.stdout:(126/136): python3-tempora-5.0.0-2.el9.noarch.r 478 kB/s | 36 kB 00:00
2026-03-10T05:50:43.579 INFO:teuthology.orchestra.run.vm04.stdout:(16/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 2.1 MB/s | 11 MB 00:05
2026-03-10T05:50:43.591 INFO:teuthology.orchestra.run.vm06.stdout:(127/136): python3-typing-extensions-4.15.0-1.e 1.6 MB/s | 86 kB 00:00
2026-03-10T05:50:43.663 INFO:teuthology.orchestra.run.vm06.stdout:(128/136): python3-webob-1.8.8-2.el9.noarch.rpm 2.1 MB/s | 230 kB 00:00
2026-03-10T05:50:43.685 INFO:teuthology.orchestra.run.vm06.stdout:(129/136): python3-websocket-client-1.2.3-2.el9 743 kB/s | 90 kB 00:00
2026-03-10T05:50:43.701 INFO:teuthology.orchestra.run.vm04.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 368 kB/s | 45 kB 00:00
2026-03-10T05:50:43.771 INFO:teuthology.orchestra.run.vm06.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 207 kB/s | 22 kB 00:00
2026-03-10T05:50:43.800 INFO:teuthology.orchestra.run.vm06.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 174 kB/s | 20 kB 00:00
2026-03-10T05:50:43.817 INFO:teuthology.orchestra.run.vm06.stdout:(132/136): python3-werkzeug-2.0.3-3.el9.1.noarc 1.9 MB/s | 427 kB 00:00
2026-03-10T05:50:43.840 INFO:teuthology.orchestra.run.vm04.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.0 MB/s | 142 kB 00:00
2026-03-10T05:50:43.869 INFO:teuthology.orchestra.run.vm06.stdout:(133/136): re2-20211101-20.el9.x86_64.rpm 1.9 MB/s | 191 kB 00:00
2026-03-10T05:50:43.963 INFO:teuthology.orchestra.run.vm04.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00
2026-03-10T05:50:44.036 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T05:50:44.036 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T05:50:44.205 INFO:teuthology.orchestra.run.vm04.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 1.3 MB/s | 323 kB 00:00
2026-03-10T05:50:44.353 INFO:teuthology.orchestra.run.vm06.stdout:(134/136): thrift-0.15.0-4.el9.x86_64.rpm 2.9 MB/s | 1.6 MB 00:00
2026-03-10T05:50:44.447 INFO:teuthology.orchestra.run.vm04.stdout:(21/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 1.2 MB/s | 303 kB 00:00
2026-03-10T05:50:44.569 INFO:teuthology.orchestra.run.vm04.stdout:(22/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 817 kB/s | 100 kB 00:00
2026-03-10T05:50:44.693 INFO:teuthology.orchestra.run.vm04.stdout:(23/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 685 kB/s | 85 kB 00:00
2026-03-10T05:50:44.842 INFO:teuthology.orchestra.run.vm06.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 3.2 MB 00:00
2026-03-10T05:50:44.945 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T05:50:44.960 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T05:50:44.974 INFO:teuthology.orchestra.run.vm08.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T05:50:45.149 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T05:50:45.151 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:45.212 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:45.213 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:45.245 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:45.255 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T05:50:45.258 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T05:50:45.261 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T05:50:45.266 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T05:50:45.276 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T05:50:45.277 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:45.314 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:45.315 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:45.330 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:45.365 INFO:teuthology.orchestra.run.vm08.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T05:50:45.403 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T05:50:45.409 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T05:50:45.434 INFO:teuthology.orchestra.run.vm08.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T05:50:45.448 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T05:50:45.455 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T05:50:45.467 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T05:50:45.473 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T05:50:45.478 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T05:50:45.483 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T05:50:45.512 INFO:teuthology.orchestra.run.vm08.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T05:50:45.530 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T05:50:45.534 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T05:50:45.542 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T05:50:45.545 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T05:50:45.575 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T05:50:45.584 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T05:50:45.595 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T05:50:45.609 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T05:50:45.617 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T05:50:45.648 INFO:teuthology.orchestra.run.vm08.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T05:50:45.654 INFO:teuthology.orchestra.run.vm08.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T05:50:45.663 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T05:50:45.692 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T05:50:45.725 INFO:teuthology.orchestra.run.vm04.stdout:(24/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 2.4 MB/s | 5.4 MB 00:02
2026-03-10T05:50:45.756 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T05:50:45.773 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T05:50:45.780 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T05:50:45.790 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T05:50:45.796 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T05:50:45.800 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T05:50:45.817 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T05:50:45.841 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T05:50:45.845 INFO:teuthology.orchestra.run.vm04.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00
2026-03-10T05:50:45.850 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T05:50:45.856 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T05:50:45.870 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T05:50:45.882 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T05:50:45.895 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T05:50:45.959 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T05:50:45.963 INFO:teuthology.orchestra.run.vm04.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 263 kB/s | 31 kB 00:00
2026-03-10T05:50:45.969 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T05:50:45.979 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T05:50:46.028 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T05:50:46.083 INFO:teuthology.orchestra.run.vm04.stdout:(27/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T05:50:46.258 INFO:teuthology.orchestra.run.vm04.stdout:(28/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 2.0 MB/s | 3.1 MB 00:01
2026-03-10T05:50:46.429 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T05:50:46.445 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T05:50:46.455 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T05:50:46.463 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T05:50:46.468 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T05:50:46.476 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T05:50:46.479 INFO:teuthology.orchestra.run.vm08.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T05:50:46.482 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T05:50:46.515 INFO:teuthology.orchestra.run.vm08.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T05:50:46.567 INFO:teuthology.orchestra.run.vm08.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T05:50:46.581 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T05:50:46.589 INFO:teuthology.orchestra.run.vm08.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T05:50:46.593 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T05:50:46.601 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T05:50:46.606 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T05:50:46.615 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T05:50:46.620 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T05:50:46.654 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T05:50:46.668 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T05:50:46.712 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T05:50:46.975 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T05:50:47.006 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T05:50:47.012 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T05:50:47.074 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T05:50:47.078 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T05:50:47.102 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T05:50:47.490 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T05:50:47.589 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T05:50:47.601 INFO:teuthology.orchestra.run.vm04.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 2.5 MB/s | 3.8 MB 00:01
2026-03-10T05:50:47.722 INFO:teuthology.orchestra.run.vm04.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00
2026-03-10T05:50:47.840 INFO:teuthology.orchestra.run.vm04.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 419 kB/s | 49 kB 00:00
2026-03-10T05:50:48.000 INFO:teuthology.orchestra.run.vm04.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 105 kB/s | 17 kB 00:00
2026-03-10T05:50:48.132 INFO:teuthology.orchestra.run.vm04.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 299 kB 00:00
2026-03-10T05:50:48.380 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T05:50:48.408 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T05:50:48.415 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T05:50:48.420 INFO:teuthology.orchestra.run.vm08.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T05:50:48.485 INFO:teuthology.orchestra.run.vm04.stdout:(34/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 2.1 MB/s | 769 kB 00:00
2026-03-10T05:50:48.577 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T05:50:48.580 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T05:50:48.614 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T05:50:48.618 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T05:50:48.626 INFO:teuthology.orchestra.run.vm08.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T05:50:48.763 INFO:teuthology.orchestra.run.vm04.stdout:(35/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.2 MB/s | 351 kB 00:00
2026-03-10T05:50:48.819 INFO:teuthology.orchestra.run.vm04.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 725 kB/s | 40 kB 00:00
2026-03-10T05:50:48.890 INFO:teuthology.orchestra.run.vm08.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T05:50:48.919 INFO:teuthology.orchestra.run.vm04.stdout:(37/136): libconfig-1.7.2-9.el9.x86_64.rpm 718 kB/s | 72 kB 00:00
2026-03-10T05:50:48.931 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T05:50:48.950 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T05:50:48.953 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T05:50:49.044 INFO:teuthology.orchestra.run.vm04.stdout:(38/136): libgfortran-11.5.0-14.el9.x86_64.rpm 6.3 MB/s | 794 kB 00:00
2026-03-10T05:50:49.117 INFO:teuthology.orchestra.run.vm04.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 2.5 MB/s | 184 kB 00:00
2026-03-10T05:50:49.170 INFO:teuthology.orchestra.run.vm04.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 623 kB/s | 33 kB 00:00
2026-03-10T05:50:49.228 INFO:teuthology.orchestra.run.vm04.stdout:(41/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.6 MB/s | 93 kB 00:00
2026-03-10T05:50:49.294 INFO:teuthology.orchestra.run.vm04.stdout:(42/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 3.7 MB/s | 253 kB 00:00
2026-03-10T05:50:49.402 INFO:teuthology.orchestra.run.vm04.stdout:(43/136): python3-cryptography-36.0.1-5.el9.x86 12 MB/s | 1.2 MB 00:00
2026-03-10T05:50:49.467 INFO:teuthology.orchestra.run.vm04.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 1.6 MB/s | 106 kB 00:00
2026-03-10T05:50:49.530 INFO:teuthology.orchestra.run.vm04.stdout:(45/136): python3-pycparser-2.20-6.el9.noarch.r 2.1 MB/s | 135 kB 00:00
2026-03-10T05:50:49.593 INFO:teuthology.orchestra.run.vm04.stdout:(46/136): python3-requests-2.25.1-10.el9.noarch 2.0 MB/s | 126 kB 00:00
2026-03-10T05:50:49.658 INFO:teuthology.orchestra.run.vm04.stdout:(47/136): python3-urllib3-1.26.5-7.el9.noarch.r 3.3 MB/s | 218 kB 00:00
2026-03-10T05:50:49.723 INFO:teuthology.orchestra.run.vm04.stdout:(48/136): unzip-6.0-59.el9.x86_64.rpm 2.8 MB/s | 182 kB 00:00
2026-03-10T05:50:49.782 INFO:teuthology.orchestra.run.vm04.stdout:(49/136): ceph-mgr-diskprediction-local-19.2.3- 2.1 MB/s | 7.4 MB 00:03
2026-03-10T05:50:49.789 INFO:teuthology.orchestra.run.vm04.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.9 MB/s | 266 kB 00:00
2026-03-10T05:50:49.945 INFO:teuthology.orchestra.run.vm04.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 191 kB/s | 30 kB 00:00
2026-03-10T05:50:50.025 INFO:teuthology.orchestra.run.vm04.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 429 kB/s | 104 kB 00:00
2026-03-10T05:50:50.080 INFO:teuthology.orchestra.run.vm04.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 272 kB/s | 15 kB 00:00
2026-03-10T05:50:50.139 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T05:50:50.145 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T05:50:50.176 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T05:50:50.191 INFO:teuthology.orchestra.run.vm04.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.4 MB/s | 164 kB 00:00
2026-03-10T05:50:50.198 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T05:50:50.261 INFO:teuthology.orchestra.run.vm04.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.2 MB/s | 160 kB 00:00
2026-03-10T05:50:50.279 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T05:50:50.316 INFO:teuthology.orchestra.run.vm04.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 830 kB/s | 45 kB 00:00
2026-03-10T05:50:50.327 INFO:teuthology.orchestra.run.vm04.stdout:(57/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 7.8 MB/s | 3.0 MB 00:00
2026-03-10T05:50:50.372 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T05:50:50.388 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T05:50:50.417 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T05:50:50.437 INFO:teuthology.orchestra.run.vm04.stdout:(58/136): librdkafka-1.6.1-102.el9.x86_64.rpm 5.4 MB/s | 662 kB 00:00
2026-03-10T05:50:50.442 INFO:teuthology.orchestra.run.vm04.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 2.1 MB/s | 246 kB 00:00
2026-03-10T05:50:50.455 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T05:50:50.505 INFO:teuthology.orchestra.run.vm04.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 3.4 MB/s | 233 kB 00:00
2026-03-10T05:50:50.507 INFO:teuthology.orchestra.run.vm04.stdout:(61/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 4.4 MB/s | 292 kB 00:00
2026-03-10T05:50:50.517 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T05:50:50.528 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T05:50:50.534 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T05:50:50.540 INFO:teuthology.orchestra.run.vm08.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T05:50:50.545 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T05:50:50.547 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T05:50:50.558 INFO:teuthology.orchestra.run.vm04.stdout:(62/136): lua-5.4.4-4.el9.x86_64.rpm 3.5 MB/s | 188 kB 00:00
2026-03-10T05:50:50.559 INFO:teuthology.orchestra.run.vm04.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 814 kB/s | 42 kB 00:00
2026-03-10T05:50:50.566 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T05:50:50.669 INFO:teuthology.orchestra.run.vm04.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 9.1 MB/s | 1.0 MB 00:00
2026-03-10T05:50:50.778 INFO:teuthology.orchestra.run.vm04.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 24 MB/s | 5.3 MB 00:00
2026-03-10T05:50:50.846 INFO:teuthology.orchestra.run.vm04.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 3.5 MB/s | 244 kB 00:00
2026-03-10T05:50:50.884 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T05:50:50.890 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T05:50:50.913 INFO:teuthology.orchestra.run.vm04.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 3.7 MB/s | 249 kB 00:00
2026-03-10T05:50:50.938 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T05:50:50.938 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T05:50:50.938 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T05:50:50.938 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:50.944 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T05:50:50.986 INFO:teuthology.orchestra.run.vm04.stdout:(68/136): python3-babel-2.9.1-2.el9.noarch.rpm 19 MB/s | 6.0 MB 00:00
2026-03-10T05:50:50.987 INFO:teuthology.orchestra.run.vm04.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 643 kB/s | 48 kB 00:00
2026-03-10T05:50:51.051 INFO:teuthology.orchestra.run.vm04.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 2.7 MB/s | 177 kB 00:00
2026-03-10T05:50:51.065 INFO:teuthology.orchestra.run.vm04.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 2.2 MB/s | 172 kB 00:00
2026-03-10T05:50:51.106 INFO:teuthology.orchestra.run.vm04.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 636 kB/s | 35 kB 00:00
2026-03-10T05:50:51.422 INFO:teuthology.orchestra.run.vm04.stdout:(73/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 17 MB/s | 6.1 MB 00:00
2026-03-10T05:50:51.475 INFO:teuthology.orchestra.run.vm04.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.4 MB/s | 77 kB 00:00
2026-03-10T05:50:51.565 INFO:teuthology.orchestra.run.vm04.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 2.9 MB/s | 267 kB 00:00
2026-03-10T05:50:51.581 INFO:teuthology.orchestra.run.vm04.stdout:(76/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 932 kB/s | 442 kB 00:00
2026-03-10T05:50:51.624 INFO:teuthology.orchestra.run.vm04.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.6 MB/s | 157 kB 00:00
2026-03-10T05:50:51.647 INFO:teuthology.orchestra.run.vm04.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 4.1 MB/s | 277 kB 00:00
2026-03-10T05:50:51.682 INFO:teuthology.orchestra.run.vm04.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 924 kB/s | 54 kB 00:00
2026-03-10T05:50:51.742 INFO:teuthology.orchestra.run.vm04.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 699 kB/s | 42 kB 00:00
2026-03-10T05:50:51.861 INFO:teuthology.orchestra.run.vm04.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 2.0 MB/s | 240 kB 00:00
2026-03-10T05:50:51.864 INFO:teuthology.orchestra.run.vm06.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 437 kB/s | 3.4 MB 00:08
2026-03-10T05:50:51.868 INFO:teuthology.orchestra.run.vm06.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:51.868 INFO:teuthology.orchestra.run.vm06.stdout:Total 8.5 MB/s | 210 MB 00:24
2026-03-10T05:50:51.923 INFO:teuthology.orchestra.run.vm04.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 605 kB/s | 37 kB 00:00
2026-03-10T05:50:51.991 INFO:teuthology.orchestra.run.vm04.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 974 kB/s | 66 kB 00:00
2026-03-10T05:50:52.065 INFO:teuthology.orchestra.run.vm04.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 4.0 MB/s | 303 kB 00:00
2026-03-10T05:50:52.127 INFO:teuthology.orchestra.run.vm04.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.0 MB/s | 64 kB 00:00
2026-03-10T05:50:52.305 INFO:teuthology.orchestra.run.vm04.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 125 kB/s | 22 kB 00:00
2026-03-10T05:50:52.438 INFO:teuthology.orchestra.run.vm04.stdout:(87/136): protobuf-compiler-3.14.0-17.el9.x86_6 6.4 MB/s | 862 kB 00:00
2026-03-10T05:50:52.519 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T05:50:52.574 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T05:50:52.574 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test 2026-03-10T05:50:52.601 INFO:teuthology.orchestra.run.vm04.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 3.3 MB/s | 551 kB 00:00 2026-03-10T05:50:52.633 INFO:teuthology.orchestra.run.vm04.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 9.5 MB/s | 308 kB 00:00 2026-03-10T05:50:52.656 INFO:teuthology.orchestra.run.vm04.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 856 kB/s | 19 kB 00:00 2026-03-10T05:50:52.765 INFO:teuthology.orchestra.run.vm04.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 41 MB/s | 4.4 MB 00:00 2026-03-10T05:50:52.791 INFO:teuthology.orchestra.run.vm04.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 978 kB/s | 25 kB 00:00 2026-03-10T05:50:52.861 INFO:teuthology.orchestra.run.vm04.stdout:(93/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 16 MB/s | 19 MB 00:01 2026-03-10T05:50:52.861 INFO:teuthology.orchestra.run.vm04.stdout:(94/136): liboath-2.6.12-1.el9.x86_64.rpm 700 kB/s | 49 kB 00:00 2026-03-10T05:50:52.886 INFO:teuthology.orchestra.run.vm04.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 5.9 MB/s | 151 kB 00:00 2026-03-10T05:50:52.919 INFO:teuthology.orchestra.run.vm04.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 25 MB/s | 838 kB 00:00 2026-03-10T05:50:52.948 INFO:teuthology.orchestra.run.vm04.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 
19 MB/s | 548 kB 00:00 2026-03-10T05:50:52.957 INFO:teuthology.orchestra.run.vm04.stdout:(98/136): libunwind-1.6.2-1.el9.x86_64.rpm 702 kB/s | 67 kB 00:00 2026-03-10T05:50:52.971 INFO:teuthology.orchestra.run.vm04.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 1.3 MB/s | 29 kB 00:00 2026-03-10T05:50:52.993 INFO:teuthology.orchestra.run.vm04.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 1.6 MB/s | 60 kB 00:00 2026-03-10T05:50:52.993 INFO:teuthology.orchestra.run.vm04.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.9 MB/s | 43 kB 00:00 2026-03-10T05:50:53.019 INFO:teuthology.orchestra.run.vm04.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 551 kB/s | 14 kB 00:00 2026-03-10T05:50:53.021 INFO:teuthology.orchestra.run.vm04.stdout:(103/136): python3-cachetools-4.2.4-1.el9.noarc 1.1 MB/s | 32 kB 00:00 2026-03-10T05:50:53.044 INFO:teuthology.orchestra.run.vm04.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 7.0 MB/s | 173 kB 00:00 2026-03-10T05:50:53.069 INFO:teuthology.orchestra.run.vm04.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 9.8 MB/s | 254 kB 00:00 2026-03-10T05:50:53.101 INFO:teuthology.orchestra.run.vm04.stdout:(106/136): python3-cherrypy-18.6.1-2.el9.noarch 4.4 MB/s | 358 kB 00:00 2026-03-10T05:50:53.252 INFO:teuthology.orchestra.run.vm04.stdout:(107/136): ceph-test-19.2.3-678.ge911bdeb.el9.x 3.5 MB/s | 50 MB 00:14 2026-03-10T05:50:53.254 INFO:teuthology.orchestra.run.vm04.stdout:(108/136): python3-grpcio-tools-1.46.7-10.el9.x 940 kB/s | 144 kB 00:00 2026-03-10T05:50:53.261 INFO:teuthology.orchestra.run.vm04.stdout:(109/136): python3-grpcio-1.46.7-10.el9.x86_64. 
11 MB/s | 2.0 MB 00:00 2026-03-10T05:50:53.277 INFO:teuthology.orchestra.run.vm04.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 787 kB/s | 18 kB 00:00 2026-03-10T05:50:53.285 INFO:teuthology.orchestra.run.vm04.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 984 kB/s | 23 kB 00:00 2026-03-10T05:50:53.307 INFO:teuthology.orchestra.run.vm04.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 674 kB/s | 20 kB 00:00 2026-03-10T05:50:53.309 INFO:teuthology.orchestra.run.vm04.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 822 kB/s | 19 kB 00:00 2026-03-10T05:50:53.330 INFO:teuthology.orchestra.run.vm04.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 1.1 MB/s | 26 kB 00:00 2026-03-10T05:50:53.356 INFO:teuthology.orchestra.run.vm04.stdout:(115/136): python3-logutils-0.3.5-21.el9.noarch 1.7 MB/s | 46 kB 00:00 2026-03-10T05:50:53.387 INFO:teuthology.orchestra.run.vm04.stdout:(116/136): python3-more-itertools-8.12.0-2.el9. 2.5 MB/s | 79 kB 00:00 2026-03-10T05:50:53.397 INFO:teuthology.orchestra.run.vm04.stdout:(117/136): python3-kubernetes-26.1.0-3.el9.noar 12 MB/s | 1.0 MB 00:00 2026-03-10T05:50:53.412 INFO:teuthology.orchestra.run.vm04.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 2.3 MB/s | 58 kB 00:00 2026-03-10T05:50:53.435 INFO:teuthology.orchestra.run.vm04.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 729 kB/s | 16 kB 00:00 2026-03-10T05:50:53.437 INFO:teuthology.orchestra.run.vm04.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 6.8 MB/s | 272 kB 00:00 2026-03-10T05:50:53.456 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded. 
2026-03-10T05:50:53.457 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T05:50:53.464 INFO:teuthology.orchestra.run.vm04.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 3.1 MB/s | 90 kB 00:00
2026-03-10T05:50:53.465 INFO:teuthology.orchestra.run.vm04.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 1.1 MB/s | 31 kB 00:00
2026-03-10T05:50:53.490 INFO:teuthology.orchestra.run.vm04.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 2.3 MB/s | 59 kB 00:00
2026-03-10T05:50:53.503 INFO:teuthology.orchestra.run.vm04.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 4.8 MB/s | 188 kB 00:00
2026-03-10T05:50:53.514 INFO:teuthology.orchestra.run.vm04.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 1.5 MB/s | 36 kB 00:00
2026-03-10T05:50:53.532 INFO:teuthology.orchestra.run.vm04.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 2.9 MB/s | 86 kB 00:00
2026-03-10T05:50:53.546 INFO:teuthology.orchestra.run.vm04.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 7.0 MB/s | 230 kB 00:00
2026-03-10T05:50:53.561 INFO:teuthology.orchestra.run.vm04.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 3.1 MB/s | 90 kB 00:00
2026-03-10T05:50:53.584 INFO:teuthology.orchestra.run.vm04.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 961 kB/s | 22 kB 00:00
2026-03-10T05:50:53.591 INFO:teuthology.orchestra.run.vm04.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 9.5 MB/s | 427 kB 00:00
2026-03-10T05:50:53.609 INFO:teuthology.orchestra.run.vm04.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 840 kB/s | 20 kB 00:00
2026-03-10T05:50:53.620 INFO:teuthology.orchestra.run.vm04.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 6.5 MB/s | 191 kB 00:00
2026-03-10T05:50:53.714 INFO:teuthology.orchestra.run.vm04.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 15 MB/s | 1.6 MB 00:00
2026-03-10T05:50:54.459 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T05:50:54.477 INFO:teuthology.orchestra.run.vm04.stdout:(134/136): python3-jaraco-8.2.1-3.el9.noarch.rp 8.7 kB/s | 11 kB 00:01
2026-03-10T05:50:54.477 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T05:50:54.492 INFO:teuthology.orchestra.run.vm06.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T05:50:54.674 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T05:50:54.723 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:54.793 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:54.794 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:54.827 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:54.838 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T05:50:54.842 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T05:50:54.846 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T05:50:54.851 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T05:50:54.862 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T05:50:54.864 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:54.903 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:54.905 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:54.922 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:54.967 INFO:teuthology.orchestra.run.vm06.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T05:50:55.011 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T05:50:55.016 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T05:50:55.043 INFO:teuthology.orchestra.run.vm06.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T05:50:55.060 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T05:50:55.071 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T05:50:55.084 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T05:50:55.091 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T05:50:55.097 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T05:50:55.103 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T05:50:55.135 INFO:teuthology.orchestra.run.vm06.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T05:50:55.155 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T05:50:55.161 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T05:50:55.169 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T05:50:55.173 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T05:50:55.205 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T05:50:55.212 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T05:50:55.223 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T05:50:55.238 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T05:50:55.248 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T05:50:55.257 INFO:teuthology.orchestra.run.vm04.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 2.1 MB/s | 3.4 MB 00:01
2026-03-10T05:50:55.285 INFO:teuthology.orchestra.run.vm06.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T05:50:55.291 INFO:teuthology.orchestra.run.vm06.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T05:50:55.300 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T05:50:55.332 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T05:50:55.396 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T05:50:55.397 INFO:teuthology.orchestra.run.vm04.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 1.9 MB/s | 3.2 MB 00:01
2026-03-10T05:50:55.400 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-10T05:50:55.401 INFO:teuthology.orchestra.run.vm04.stdout:Total 9.1 MB/s | 210 MB 00:23
2026-03-10T05:50:55.414 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T05:50:55.422 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T05:50:55.434 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T05:50:55.441 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T05:50:55.446 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T05:50:55.463 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T05:50:55.491 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T05:50:55.499 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T05:50:55.506 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T05:50:55.521 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T05:50:55.534 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T05:50:55.546 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T05:50:55.620 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T05:50:55.630 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T05:50:55.642 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T05:50:55.695 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T05:50:56.032 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T05:50:56.087 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T05:50:56.087 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T05:50:56.104 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T05:50:56.126 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T05:50:56.134 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T05:50:56.143 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T05:50:56.149 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T05:50:56.158 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T05:50:56.162 INFO:teuthology.orchestra.run.vm06.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T05:50:56.165 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T05:50:56.199 INFO:teuthology.orchestra.run.vm06.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T05:50:56.253 INFO:teuthology.orchestra.run.vm06.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T05:50:56.268 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T05:50:56.279 INFO:teuthology.orchestra.run.vm06.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T05:50:56.339 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T05:50:56.402 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T05:50:56.459 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T05:50:56.515 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T05:50:56.562 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T05:50:56.622 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T05:50:56.739 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T05:50:56.813 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T05:50:56.925 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T05:50:56.925 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T05:50:57.142 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T05:50:57.173 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T05:50:57.179 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T05:50:57.242 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T05:50:57.245 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T05:50:57.270 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp
2026-03-10T05:50:57.626 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:57.667 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T05:50:57.751 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:50:57.756 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:50:57.778 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:58.011 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-10T05:50:58.016 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:50:58.025 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T05:50:58.039 INFO:teuthology.orchestra.run.vm04.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:50:58.042 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:58.054 INFO:teuthology.orchestra.run.vm08.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T05:50:58.057 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T05:50:58.078 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:50:58.078 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'qat' with GID 994.
2026-03-10T05:50:58.078 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T05:50:58.078 INFO:teuthology.orchestra.run.vm08.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T05:50:58.078 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:58.091 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:50:58.115 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:50:58.115 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T05:50:58.115 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:58.161 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T05:50:58.213 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T05:50:58.219 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:58.248 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T05:50:58.253 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:50:58.268 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:50:58.268 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:50:58.268 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T05:50:58.268 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:58.283 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T05:50:58.286 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:58.317 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T05:50:58.326 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T05:50:58.330 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T05:50:58.332 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T05:50:58.337 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T05:50:58.347 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T05:50:58.349 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:58.386 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T05:50:58.389 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:58.404 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T05:50:58.439 INFO:teuthology.orchestra.run.vm04.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T05:50:58.479 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T05:50:58.484 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T05:50:58.510 INFO:teuthology.orchestra.run.vm04.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T05:50:58.524 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T05:50:58.533 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T05:50:58.545 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T05:50:58.550 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T05:50:58.557 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T05:50:58.565 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T05:50:58.573 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T05:50:58.575 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T05:50:58.582 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T05:50:58.588 INFO:teuthology.orchestra.run.vm06.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T05:50:58.601 INFO:teuthology.orchestra.run.vm04.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T05:50:58.619 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T05:50:58.623 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T05:50:58.630 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T05:50:58.633 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T05:50:58.663 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T05:50:58.670 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T05:50:58.681 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T05:50:58.695 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T05:50:58.704 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T05:50:58.733 INFO:teuthology.orchestra.run.vm04.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T05:50:58.739 INFO:teuthology.orchestra.run.vm04.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T05:50:58.747 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T05:50:58.750 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T05:50:58.754 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T05:50:58.777 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T05:50:58.788 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T05:50:58.792 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T05:50:58.801 INFO:teuthology.orchestra.run.vm06.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T05:50:58.841 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T05:50:58.858 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T05:50:58.867 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T05:50:58.877 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T05:50:58.884 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T05:50:58.890 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T05:50:58.908 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T05:50:58.935 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T05:50:58.941 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T05:50:58.949 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T05:50:58.966 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T05:50:58.980 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T05:50:58.994 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T05:50:59.058 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T05:50:59.059 INFO:teuthology.orchestra.run.vm06.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T05:50:59.063 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T05:50:59.067 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T05:50:59.071 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:50:59.078 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T05:50:59.083 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T05:50:59.085 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T05:50:59.097 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:50:59.098 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:50:59.098 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T05:50:59.098 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:50:59.098 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:50:59.098 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:50:59.128 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T05:50:59.161 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:50:59.164 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:50:59.170 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T05:50:59.194 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T05:50:59.197 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:50:59.514 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T05:50:59.530 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T05:50:59.536 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T05:50:59.543 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T05:50:59.548 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T05:50:59.555 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T05:50:59.560 INFO:teuthology.orchestra.run.vm04.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T05:50:59.562 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T05:50:59.593 INFO:teuthology.orchestra.run.vm04.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T05:50:59.644 INFO:teuthology.orchestra.run.vm04.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T05:50:59.658 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T05:50:59.666 INFO:teuthology.orchestra.run.vm04.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T05:50:59.671 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T05:50:59.678 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T05:50:59.684 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T05:50:59.693 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T05:50:59.699 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T05:50:59.732 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T05:50:59.737 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:50:59.742 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T05:50:59.746 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T05:50:59.791 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T05:51:00.064 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T05:51:00.096 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T05:51:00.102 INFO:teuthology.orchestra.run.vm04.stdout: Installing :
python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T05:51:00.162 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T05:51:00.165 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T05:51:00.189 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T05:51:00.222 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T05:51:00.227 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T05:51:00.251 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T05:51:00.265 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T05:51:00.267 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T05:51:00.269 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T05:51:00.292 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T05:51:00.332 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138 2026-03-10T05:51:00.389 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138 2026-03-10T05:51:00.391 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T05:51:00.394 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: 
ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T05:51:00.413 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:51:00.454 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T05:51:00.463 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T05:51:00.468 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T05:51:00.492 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T05:51:00.530 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T05:51:00.572 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T05:51:00.592 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T05:51:00.604 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T05:51:00.610 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T05:51:00.617 INFO:teuthology.orchestra.run.vm06.stdout: Installing : 
pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T05:51:00.623 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T05:51:00.625 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T05:51:00.644 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T05:51:00.664 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T05:51:00.953 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T05:51:00.959 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T05:51:00.986 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-10T05:51:00.989 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T05:51:00.999 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T05:51:00.999 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T05:51:00.999 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 
2026-03-10T05:51:00.999 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T05:51:01.004 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T05:51:01.012 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:51:01.024 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T05:51:01.046 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T05:51:01.046 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:01.046 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-10T05:51:01.046 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:51:01.203 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-10T05:51:01.227 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:51:01.486 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T05:51:01.513 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T05:51:01.520 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T05:51:01.525 INFO:teuthology.orchestra.run.vm04.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T05:51:01.685 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T05:51:01.688 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T05:51:01.724 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T05:51:01.728 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T05:51:01.736 INFO:teuthology.orchestra.run.vm04.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T05:51:01.994 INFO:teuthology.orchestra.run.vm04.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T05:51:01.997 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T05:51:02.017 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T05:51:02.020 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T05:51:03.143 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T05:51:03.149 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 
2026-03-10T05:51:03.172 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T05:51:03.189 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T05:51:03.209 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T05:51:03.299 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T05:51:03.313 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T05:51:03.342 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T05:51:03.379 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T05:51:03.442 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T05:51:03.452 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T05:51:03.458 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T05:51:03.464 INFO:teuthology.orchestra.run.vm04.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T05:51:03.469 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T05:51:03.471 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T05:51:03.491 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T05:51:03.769 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T05:51:03.780 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 
2026-03-10T05:51:03.786 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T05:51:03.805 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T05:51:03.844 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T05:51:03.904 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T05:51:03.909 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T05:51:03.913 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T05:51:03.913 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:03.930 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:03.930 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:03.952 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T05:51:03.952 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T05:51:03.952 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 
2026-03-10T05:51:03.952 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:03.957 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T05:51:05.300 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T05:51:05.301 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T05:51:05.302 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T05:51:05.302 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T05:51:05.303 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T05:51:05.303 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : 
libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T05:51:05.304 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T05:51:05.305 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T05:51:05.306 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T05:51:05.306 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T05:51:05.306 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout:Upgraded:
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T05:51:05.413 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T05:51:05.414 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T05:51:05.415 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T05:51:05.516 DEBUG:teuthology.parallel:result is None
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /sys
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /proc
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /mnt
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /var/tmp
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /home
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /root
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /tmp
2026-03-10T05:51:07.324 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:07.450 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:51:07.477 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:07.708 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:51:07.733 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:07.741 INFO:teuthology.orchestra.run.vm06.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T05:51:07.743 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T05:51:07.761 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:07.761 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'qat' with GID 994.
2026-03-10T05:51:07.761 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T05:51:07.761 INFO:teuthology.orchestra.run.vm06.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T05:51:07.761 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:07.772 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:07.799 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:07.799 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T05:51:07.799 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:07.841 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T05:51:07.914 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T05:51:07.920 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:51:07.935 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:51:07.935 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:07.935 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T05:51:07.936 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:08.733 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:51:08.757 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:08.820 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:51:08.823 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:51:08.830 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T05:51:08.854 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T05:51:08.857 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:51:09.441 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:51:09.582 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T05:51:10.110 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T05:51:10.113 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T05:51:10.178 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T05:51:10.239 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T05:51:10.243 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T05:51:10.264 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T05:51:10.265 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.265 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T05:51:10.265 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T05:51:10.265 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T05:51:10.265 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:10.279 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T05:51:10.291 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp
2026-03-10T05:51:10.306 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:10.427 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T05:51:10.450 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:10.679 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T05:51:10.700 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:10.709 INFO:teuthology.orchestra.run.vm04.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T05:51:10.712 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T05:51:10.730 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:10.730 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'qat' with GID 994.
2026-03-10T05:51:10.730 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T05:51:10.730 INFO:teuthology.orchestra.run.vm04.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T05:51:10.730 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:10.741 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:10.768 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T05:51:10.768 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T05:51:10.768 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:10.803 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T05:51:10.806 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T05:51:10.811 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T05:51:10.827 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T05:51:10.827 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.827 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T05:51:10.828 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T05:51:10.828 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T05:51:10.828 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:10.839 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T05:51:10.860 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T05:51:10.860 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.860 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T05:51:10.860 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:10.893 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T05:51:10.898 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:51:10.912 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T05:51:10.912 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:10.912 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T05:51:10.912 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:11.011 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T05:51:11.033 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:51:11.684 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T05:51:11.708 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:51:11.765 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:51:11.768 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T05:51:11.774 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T05:51:11.797 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T05:51:11.800 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:51:12.331 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T05:51:12.337 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T05:51:12.844 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T05:51:12.847 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T05:51:12.912 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T05:51:12.970 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T05:51:12.973 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T05:51:12.997 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:13.012 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T05:51:13.023 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T05:51:13.547 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-10T05:51:13.552 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 
2026-03-10T05:51:13.576 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:13.588 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T05:51:13.589 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T05:51:13.602 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T05:51:13.607 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T05:51:13.610 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T05:51:13.610 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:13.610 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-10T05:51:13.610 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:13.664 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T05:51:13.674 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T05:51:13.679 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T05:51:13.679 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:13.698 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:13.698 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:13.771 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T05:51:13.792 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T05:51:13.793 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T05:51:13.793 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T05:51:13.793 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T05:51:13.793 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-10T05:51:13.793 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:15.081 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:15.081 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T05:51:15.081 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T05:51:15.082 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T05:51:15.083 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying 
: python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T05:51:15.084 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T05:51:15.084 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T05:51:15.085 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T05:51:15.086 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T05:51:15.086 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T05:51:15.086 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 
2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T05:51:15.087 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout:Upgraded: 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout:Installed: 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: 
abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.191 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: 
ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T05:51:15.192 
INFO:teuthology.orchestra.run.vm06.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T05:51:15.192 
INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T05:51:15.192 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel-3.9.25-3.el9.x86_64 
2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: 
python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: 
python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T05:51:15.193 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: unzip-6.0-59.el9.x86_64 
2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: zip-3.0-35.el9.x86_64 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T05:51:15.194 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T05:51:15.277 DEBUG:teuthology.parallel:result is None 2026-03-10T05:51:16.342 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T05:51:16.353 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T05:51:16.359 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T05:51:16.416 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T05:51:16.427 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T05:51:16.431 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T05:51:16.431 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:16.446 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T05:51:16.446 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T05:51:17.809 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T05:51:17.810 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T05:51:17.811 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T05:51:17.811 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T05:51:17.811 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T05:51:17.811 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 
2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 
2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T05:51:17.812 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T05:51:17.813 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T05:51:17.813 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 
2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T05:51:17.814 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T05:51:17.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T05:51:17.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T05:51:17.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T05:51:17.815 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout:Upgraded: 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.908 INFO:teuthology.orchestra.run.vm04.stdout:Installed: 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 
2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: 
librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T05:51:17.909 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T05:51:17.910 
INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T05:51:17.910 
INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T05:51:17.910 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:51:17.911 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
2026-03-10T05:51:17.996 DEBUG:teuthology.parallel:result is None 2026-03-10T05:51:17.996 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:18.667 DEBUG:teuthology.orchestra.run.vm04:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T05:51:18.686 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:18.686 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:18.686 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T05:51:18.688 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:19.302 DEBUG:teuthology.orchestra.run.vm06:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T05:51:19.320 INFO:teuthology.orchestra.run.vm06.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:19.321 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:19.321 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T05:51:19.322 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:19.975 DEBUG:teuthology.orchestra.run.vm08:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T05:51:19.996 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:19.996 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T05:51:19.996 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 
2026-03-10T05:51:19.997 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T05:51:19.997 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:51:19.997 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T05:51:20.024 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:51:20.024 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T05:51:20.050 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:51:20.050 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T05:51:20.078 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T05:51:20.079 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:51:20.079 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T05:51:20.103 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T05:51:20.168 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:51:20.168 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T05:51:20.195 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T05:51:20.259 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:51:20.259 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T05:51:20.281 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T05:51:20.345 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 
2026-03-10T05:51:20.346 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:51:20.346 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T05:51:20.369 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T05:51:20.432 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:51:20.432 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T05:51:20.455 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T05:51:20.518 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:51:20.518 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T05:51:20.543 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T05:51:20.608 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T05:51:20.608 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:51:20.608 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T05:51:20.631 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T05:51:20.695 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:51:20.695 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T05:51:20.720 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T05:51:20.788 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:51:20.788 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T05:51:20.813 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T05:51:20.876 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-10T05:51:20.939 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN', 'mons down', 'mon down', 'out of quorum', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T05:51:20.939 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:20.939 INFO:tasks.cephadm:Cluster fsid is 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:51:20.939 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T05:51:20.939 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.104', 'mon.b': '192.168.123.106', 'mon.c': '192.168.123.108'} 2026-03-10T05:51:20.939 INFO:tasks.cephadm:First mon is mon.a on vm04 2026-03-10T05:51:20.939 INFO:tasks.cephadm:First mgr is a 2026-03-10T05:51:20.939 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T05:51:20.939 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s) 2026-03-10T05:51:20.962 DEBUG:teuthology.orchestra.run.vm06:> sudo hostname $(hostname -s) 2026-03-10T05:51:20.986 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s) 2026-03-10T05:51:21.011 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T05:51:21.011 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:21.609 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T05:51:22.187 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:51:22.188 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T05:51:22.188 INFO:tasks.cephadm:Downloading cephadm from url: 
https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T05:51:22.188 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:23.691 INFO:teuthology.orchestra.run.vm04.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 05:51 /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:23.691 DEBUG:teuthology.orchestra.run.vm06:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:25.172 INFO:teuthology.orchestra.run.vm06.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 05:51 /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:25.172 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:26.635 INFO:teuthology.orchestra.run.vm08.stdout:-rw-r--r--. 
1 ubuntu ubuntu 788355 Mar 10 05:51 /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:26.635 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:26.650 DEBUG:teuthology.orchestra.run.vm06:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:26.665 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T05:51:26.684 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T05:51:26.684 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T05:51:26.691 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T05:51:26.706 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T05:51:26.852 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T05:51:26.862 INFO:teuthology.orchestra.run.vm06.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T05:51:26.881 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout:{ 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout: "repo_digests": [ 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout: ] 2026-03-10T05:52:08.859 INFO:teuthology.orchestra.run.vm06.stdout:} 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [ 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout: ] 2026-03-10T05:52:10.817 INFO:teuthology.orchestra.run.vm04.stdout:} 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout:{ 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T05:52:48.818 
INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [ 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout: ] 2026-03-10T05:52:48.818 INFO:teuthology.orchestra.run.vm08.stdout:} 2026-03-10T05:52:48.835 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-10T05:52:48.860 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph 2026-03-10T05:52:48.887 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph 2026-03-10T05:52:48.914 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-10T05:52:48.937 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /etc/ceph 2026-03-10T05:52:48.963 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph 2026-03-10T05:52:48.989 INFO:tasks.cephadm:Writing seed config... 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T05:52:48.989 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T05:52:48.990 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:52:48.990 DEBUG:teuthology.orchestra.run.vm04:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T05:52:49.004 DEBUG:tasks.cephadm:Final config: 
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
mon election default strategy = 1
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T05:52:49.004 DEBUG:teuthology.orchestra.run.vm04:mon.a> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service 2026-03-10T05:52:49.046 DEBUG:teuthology.orchestra.run.vm04:mgr.a> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a.service 2026-03-10T05:52:49.088 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T05:52:49.088 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.104 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:49.225 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-10T05:52:49.226 INFO:teuthology.orchestra.run.vm04.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '2a12cf18-1c45-11f1-9f2e-3f4ab8754027', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.104', '--skip-admin-label'] 2026-03-10T05:52:49.226 INFO:teuthology.orchestra.run.vm04.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T05:52:49.226 INFO:teuthology.orchestra.run.vm04.stdout:Verifying podman|docker is present... 2026-03-10T05:52:49.246 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-10T05:52:49.246 INFO:teuthology.orchestra.run.vm04.stdout:Verifying lvm2 is present... 2026-03-10T05:52:49.246 INFO:teuthology.orchestra.run.vm04.stdout:Verifying time synchronization is in place... 
2026-03-10T05:52:49.254 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T05:52:49.254 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T05:52:49.262 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T05:52:49.262 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T05:52:49.269 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-10T05:52:49.275 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-10T05:52:49.275 INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-10T05:52:49.275 INFO:teuthology.orchestra.run.vm04.stdout:Repeating the final host check... 2026-03-10T05:52:49.294 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 5.8.0 2026-03-10T05:52:49.295 INFO:teuthology.orchestra.run.vm04.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-10T05:52:49.295 INFO:teuthology.orchestra.run.vm04.stdout:systemctl is present 2026-03-10T05:52:49.295 INFO:teuthology.orchestra.run.vm04.stdout:lvcreate is present 2026-03-10T05:52:49.300 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T05:52:49.300 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T05:52:49.305 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T05:52:49.305 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T05:52:49.311 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-10T05:52:49.316 
INFO:teuthology.orchestra.run.vm04.stdout:Unit chronyd.service is enabled and running 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:Host looks OK 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:Cluster fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:Acquiring lock 140325612611376 on /run/cephadm/2a12cf18-1c45-11f1-9f2e-3f4ab8754027.lock 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:Lock 140325612611376 acquired on /run/cephadm/2a12cf18-1c45-11f1-9f2e-3f4ab8754027.lock 2026-03-10T05:52:49.316 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 3300 ... 2026-03-10T05:52:49.317 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 6789 ... 2026-03-10T05:52:49.317 INFO:teuthology.orchestra.run.vm04.stdout:Base mon IP(s) is [192.168.123.104:3300, 192.168.123.104:6789], mon addrv is [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-10T05:52:49.319 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.104 metric 100 2026-03-10T05:52:49.319 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.104 metric 100 2026-03-10T05:52:49.321 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T05:52:49.321 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-10T05:52:49.323 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T05:52:49.324 
INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:4/64 scope link noprefixroute 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-10T05:52:49.324 INFO:teuthology.orchestra.run.vm04.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-10T05:52:49.325 INFO:teuthology.orchestra.run.vm04.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T05:52:49.325 INFO:teuthology.orchestra.run.vm04.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Getting image source signatures 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T05:52:50.545 INFO:teuthology.orchestra.run.vm04.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-10T05:52:50.836 INFO:teuthology.orchestra.run.vm04.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T05:52:50.836 INFO:teuthology.orchestra.run.vm04.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T05:52:50.836 INFO:teuthology.orchestra.run.vm04.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T05:52:51.053 INFO:teuthology.orchestra.run.vm04.stdout:stat: stdout 167 167 2026-03-10T05:52:51.053 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial keys... 2026-03-10T05:52:51.494 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCzsa9pj+NkCRAANkmKx8zSd+kNCb2K8JMg/w== 2026-03-10T05:52:51.851 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCzsa9p9b9YKxAA1Oin9+ObZja3uCjv4layVg== 2026-03-10T05:52:52.067 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCzsa9pLCVbOBAADicXltR5sJ632KTDHMeT+Q== 2026-03-10T05:52:52.068 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial monmap... 
2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:monmaptool for a [v2:192.168.123.104:3300,v1:192.168.123.104:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:setting min_mon_release = quincy 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: set fsid to 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:52:52.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T05:52:52.291 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:52:52.291 INFO:teuthology.orchestra.run.vm04.stdout:Creating mon... 2026-03-10T05:52:52.539 INFO:teuthology.orchestra.run.vm04.stdout:create mon.a on 2026-03-10T05:52:52.695 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-10T05:52:52.810 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T05:52:52.936 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target → /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target. 
2026-03-10T05:52:52.936 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target → /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target. 2026-03-10T05:52:53.066 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a 2026-03-10T05:52:53.066 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service: Unit ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service not loaded. 2026-03-10T05:52:53.201 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target.wants/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service → /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@.service. 2026-03-10T05:52:53.365 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-10T05:52:53.365 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T05:52:53.365 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon to start... 2026-03-10T05:52:53.365 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon... 
2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout id: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout services: 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.148294s) 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout data: 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.690 INFO:teuthology.orchestra.run.vm04.stdout:mon is available 2026-03-10T05:52:53.691 INFO:teuthology.orchestra.run.vm04.stdout:Assimilating anything we can from ceph.conf... 
2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T05:52:53.971 INFO:teuthology.orchestra.run.vm04.stdout:Generating new minimal ceph.conf... 
2026-03-10T05:52:54.268 INFO:teuthology.orchestra.run.vm04.stdout:Restarting the monitor... 2026-03-10T05:52:54.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a[50507]: 2026-03-10T05:52:54.337+0000 7fb9b9845640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T05:52:54.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50806]: 2026-03-10 05:52:54.554785887 +0000 UTC m=+0.230499346 container died 6bb26abd8373517b209a8b277fc7f31017feecbc2397836dcc406763067542bd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid) 2026-03-10T05:52:54.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50806]: 2026-03-10 05:52:54.672576497 +0000 UTC m=+0.348289956 container remove 6bb26abd8373517b209a8b277fc7f31017feecbc2397836dcc406763067542bd (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-10T05:52:54.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 bash[50806]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a 2026-03-10T05:52:54.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 systemd[1]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service: Deactivated successfully. 2026-03-10T05:52:54.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 systemd[1]: Stopped Ceph mon.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027. 2026-03-10T05:52:54.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 systemd[1]: Starting Ceph mon.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 2026-03-10T05:52:54.858 INFO:teuthology.orchestra.run.vm04.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50884]: 2026-03-10 05:52:54.811790551 +0000 UTC m=+0.015612969 container create f5dff0ec46a71567a55c908e4128afc7401333a71b37efe4d5b71127725a0e65 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release 
Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2) 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50884]: 2026-03-10 05:52:54.84103075 +0000 UTC m=+0.044853168 container init f5dff0ec46a71567a55c908e4128afc7401333a71b37efe4d5b71127725a0e65 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50884]: 2026-03-10 05:52:54.844782712 +0000 UTC m=+0.048605121 container start f5dff0ec46a71567a55c908e4128afc7401333a71b37efe4d5b71127725a0e65 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, OSD_FLAVOR=default, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 bash[50884]: f5dff0ec46a71567a55c908e4128afc7401333a71b37efe4d5b71127725a0e65 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 podman[50884]: 2026-03-10 05:52:54.805535967 +0000 UTC m=+0.009358385 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 systemd[1]: Started Ceph mon.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027. 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: pidfile_write: ignore empty --pid-file 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: load: jerasure load: lrc 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: RocksDB version: 7.9.2 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Git sha 0 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: DB SUMMARY 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: DB Session ID: 
PDSAPCTG2YL6EUHLM4PT 2026-03-10T05:52:55.173 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: CURRENT file: CURRENT 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ; 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.error_if_exists: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.create_if_missing: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.paranoid_checks: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.env: 0x560a3d7fddc0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.fs: PosixFileSystem 
2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.info_log: 0x560a3fb56700 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.statistics: (nil) 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.use_fsync: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_log_file_size: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_fallocate: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.use_direct_reads: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 
2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.db_log_dir: 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.wal_dir: 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.write_buffer_manager: 0x560a3fb5b900 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: 
Options.random_access_max_buffer_size: 1048576 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.unordered_write: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.row_cache: None 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.wal_filter: None 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.two_write_queues: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.wal_compression: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.atomic_flush: 0 2026-03-10T05:52:55.174 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.log_readahead_size: 0 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T05:52:55.175 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_background_jobs: 2
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_background_compactions: -1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_subcompactions: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_open_files: -1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_background_flushes: -1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Compression algorithms supported:
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kZSTD supported: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kXpressCompression supported: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kBZip2Compression supported: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kLZ4Compression supported: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kZlibCompression supported: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: kSnappyCompression supported: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.merge_operator:
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_filter: None
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560a3fb56640)
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: pin_top_level_index_and_filter: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: index_type: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_index_type: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: index_shortening: 1
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: checksum: 4
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: no_block_cache: 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache: 0x560a3fb7b350
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_name: BinnedLRUCache
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_options:
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: capacity : 536870912
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: num_shard_bits : 4
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: strict_capacity_limit : 0
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: high_pri_pool_ratio: 0.000
2026-03-10T05:52:55.175 INFO:journalctl@ceph.mon.a.vm04.stdout: block_cache_compressed: (nil)
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: persistent_cache: (nil)
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size: 4096
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: block_size_deviation: 10
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: block_restart_interval: 16
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: index_block_restart_interval: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: metadata_block_size: 4096
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: partition_filters: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: use_delta_encoding: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: filter_policy: bloomfilter
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: whole_key_filtering: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: verify_compression: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: read_amp_bytes_per_bit: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: format_version: 5
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: enable_index_compression: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: block_align: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: max_auto_readahead_size: 262144
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: prepopulate_block_cache: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: initial_auto_readahead_size: 8192
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression: NoCompression
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.num_levels: 7
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T05:52:55.176 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.inplace_update_support: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.bloom_locality: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.max_successive_merges: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.paranoid_file_checks: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.force_consistency_checks: 1
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.report_bg_io_stats: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.ttl: 2592000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enable_blob_files: false
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.min_blob_size: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_file_size: 268435456
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.blob_file_starting_level: 0
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 986b29ad-5d4d-424e-9f23-17dc72ddacb4
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773121974866569, "job": 1, "event": "recovery_started", "wal_files": [9]}
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773121974868110, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773121974, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "986b29ad-5d4d-424e-9f23-17dc72ddacb4", "db_session_id": "PDSAPCTG2YL6EUHLM4PT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773121974868158, "job": 1, "event": "recovery_finished"}
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560a3fb7ce00
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: DB pointer 0x560a3fc92000
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: ** DB Stats **
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T05:52:55.177 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] **
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: L0 2/0 72.77 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 50.8 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Sum 2/0 72.77 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 50.8 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 50.8 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: ** Compaction Stats [default] **
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 50.8 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Cumulative compaction: 0.00 GB write, 12.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Interval compaction: 0.00 GB write, 12.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache BinnedLRUCache@0x560a3fb7b350#7 capacity: 512.00 MB usage: 26.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: starting mon.a rank 0 at public addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] at bind addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: mon.a@-1(???) e1 preinit fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: monmap epoch 1
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: last_changed 2026-03-10T05:52:52.167191+0000
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: created 2026-03-10T05:52:52.167191+0000
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: min_mon_release 19 (squid)
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: election_strategy: 1
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: fsmap
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: osdmap e1: 0 total, 0 up, 0 in
2026-03-10T05:52:55.178 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:54 vm04 ceph-mon[50920]: mgrmap e1: no daemons active
2026-03-10T05:52:55.178 INFO:teuthology.orchestra.run.vm04.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-10T05:52:55.178 INFO:teuthology.orchestra.run.vm04.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:52:55.178 INFO:teuthology.orchestra.run.vm04.stdout:Creating mgr...
2026-03-10T05:52:55.178 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-10T05:52:55.178 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-10T05:52:55.319 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a
2026-03-10T05:52:55.319 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a.service: Unit ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a.service not loaded.
2026-03-10T05:52:55.431 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027.target.wants/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a.service → /etc/systemd/system/ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@.service.
2026-03-10T05:52:55.447 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 systemd[1]: Starting Ceph mgr.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027...
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr to start...
2026-03-10T05:52:55.592 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr...
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 podman[51142]: 2026-03-10 05:52:55.535843054 +0000 UTC m=+0.014855842 container create 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 podman[51142]: 2026-03-10 05:52:55.574762958 +0000 UTC m=+0.053775756 container init 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid)
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 podman[51142]: 2026-03-10 05:52:55.578921541 +0000 UTC m=+0.057934339 container start 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 bash[51142]: 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 podman[51142]: 2026-03-10 05:52:55.529403493 +0000 UTC m=+0.008416300 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 systemd[1]: Started Ceph mgr.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027.
2026-03-10T05:52:55.711 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:55.672+0000 7ff7862a1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2a12cf18-1c45-11f1-9f2e-3f4ab8754027", 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T05:52:55.894 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T05:52:55.895 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T05:52:55.895 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T05:52:53:392948+0000", 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T05:52:53.393498+0000", 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T05:52:55.895 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (1/15)... 2026-03-10T05:52:56.046 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:55.715+0000 7ff7862a1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:52:56.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/520650513' entity='client.admin' 2026-03-10T05:52:56.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/4016561319' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T05:52:56.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:56.104+0000 7ff7862a1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:56.399+0000 7ff7862a1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: from numpy import show_config as show_numpy_config 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:56.478+0000 7ff7862a1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:56.512+0000 7ff7862a1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:52:56.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:56.576+0000 7ff7862a1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:52:57.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.052+0000 7ff7862a1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:52:57.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.161+0000 7ff7862a1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:57.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.202+0000 7ff7862a1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:52:57.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.238+0000 7ff7862a1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:57.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.279+0000 7ff7862a1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:52:57.753 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.318+0000 7ff7862a1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:52:57.753 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.489+0000 7ff7862a1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:52:57.753 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.539+0000 7ff7862a1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:57.753 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:57 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:57.751+0000 7ff7862a1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2a12cf18-1c45-11f1-9f2e-3f4ab8754027", 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.207 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T05:52:58.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 
0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.208 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T05:52:53:392948+0000", 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T05:52:58.209 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T05:52:53.393498+0000", 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T05:52:58.209 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (2/15)... 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/1199438454' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.038+0000 7ff7862a1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.081+0000 7ff7862a1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.128+0000 7ff7862a1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.203+0000 7ff7862a1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:52:58.308 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.240+0000 7ff7862a1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:52:58.588 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.316+0000 7ff7862a1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:52:58.588 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.422+0000 7ff7862a1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:58.588 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.552+0000 7ff7862a1140 -1 
mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:52:58.588 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:52:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:52:58.586+0000 7ff7862a1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: Activating manager daemon a 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: mgrmap e2: a(active, starting, since 0.00432882s) 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: Manager daemon a is now available 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' 2026-03-10T05:52:59.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:52:59 vm04 ceph-mon[50920]: from='mgr.14100 192.168.123.104:0/631788744' entity='mgr.a' 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "2a12cf18-1c45-11f1-9f2e-3f4ab8754027", 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T05:53:00.614 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T05:53:00.614 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 
"pgmap": { 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T05:52:53:392948+0000", 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ],
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T05:52:53.393498+0000",
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout },
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-10T05:53:00.615 INFO:teuthology.orchestra.run.vm04.stdout:mgr is available
2026-03-10T05:53:00.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:00 vm04 ceph-mon[50920]: mgrmap e3: a(active, since 1.00808s)
2026-03-10T05:53:00.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:00 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/1356984426' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789]
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T05:53:00.971 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T05:53:00.972 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T05:53:00.972 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T05:53:00.972 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T05:53:00.972 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T05:53:00.972 INFO:teuthology.orchestra.run.vm04.stdout:Enabling cephadm module...
2026-03-10T05:53:01.852 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-mon[50920]: mgrmap e4: a(active, since 2s)
2026-03-10T05:53:01.852 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3294220149' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T05:53:01.852 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3294220149' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T05:53:01.852 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2465071639' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T05:53:02.122 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setuser ceph since I am not root
2026-03-10T05:53:02.122 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setgroup ceph since I am not root
2026-03-10T05:53:02.122 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:01.942+0000 7fe2d3161140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:53:02.122 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:01.982+0000 7fe2d3161140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart...
2026-03-10T05:53:02.406 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 5...
2026-03-10T05:53:02.699 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:02.381+0000 7fe2d3161140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:02.697+0000 7fe2d3161140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: from numpy import show_config as show_numpy_config
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:02.777+0000 7fe2d3161140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:02.812+0000 7fe2d3161140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:02.884+0000 7fe2d3161140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2465071639' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-mon[50920]: mgrmap e5: a(active, since 3s)
2026-03-10T05:53:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:02 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/1798184878' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.366+0000 7fe2d3161140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.472+0000 7fe2d3161140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.509+0000 7fe2d3161140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.541+0000 7fe2d3161140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.579+0000 7fe2d3161140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:53:03.777 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.615+0000 7fe2d3161140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:53:04.028 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.775+0000 7fe2d3161140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:53:04.029 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:03 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:03.823+0000 7fe2d3161140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:04.287 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.026+0000 7fe2d3161140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T05:53:04.539 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.285+0000 7fe2d3161140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:53:04.539 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.319+0000 7fe2d3161140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:53:04.539 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.357+0000 7fe2d3161140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:53:04.539 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.429+0000 7fe2d3161140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:53:04.539 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.463+0000 7fe2d3161140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:53:04.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.537+0000 7fe2d3161140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:53:04.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.645+0000 7fe2d3161140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:04.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.772+0000 7fe2d3161140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:04.805+0000 7fe2d3161140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: Active manager daemon a restarted
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: Activating manager daemon a
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: osdmap e2: 0 total, 0 up, 0 in
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: mgrmap e6: a(active, starting, since 0.0142923s)
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: Manager daemon a is now available
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:05.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:04 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 5 is available
2026-03-10T05:53:05.959 INFO:teuthology.orchestra.run.vm04.stdout:Setting orchestrator backend to cephadm...
2026-03-10T05:53:06.227 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:05 vm04 ceph-mon[50920]: Found migration_current of "None". Setting to last migration.
2026-03-10T05:53:06.227 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:05 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T05:53:06.227 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:05 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:06.227 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:05 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:06.227 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:05 vm04 ceph-mon[50920]: mgrmap e7: a(active, since 1.02475s)
2026-03-10T05:53:06.723 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T05:53:06.723 INFO:teuthology.orchestra.run.vm04.stdout:Generating ssh key...
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-mon[50920]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-mon[50920]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Generating public/private ed25519 key pair.
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Your identification has been saved in /tmp/tmpm9gkhmjh/key
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Your public key has been saved in /tmp/tmpm9gkhmjh/key.pub
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: The key fingerprint is:
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: SHA256:y3Ml2tmCiuQj9GJbnk1x6i8r3+6a0mEYU57ZLmzm6tY ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: The key's randomart image is:
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: +--[ED25519 256]--+
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: |                 |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | .               |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | o +             |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | o + .           |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | =..S . .        |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | . . B=.= =      |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | . .oBoo* = .    |
2026-03-10T05:53:07.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | +=**E+ o .      |
2026-03-10T05:53:07.058 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: | ..*B*OB=        |
2026-03-10T05:53:07.058 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:06 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: +----[SHA256]-----+
2026-03-10T05:53:07.468 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:53:07.468 INFO:teuthology.orchestra.run.vm04.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T05:53:07.468 INFO:teuthology.orchestra.run.vm04.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T05:53:07.468 INFO:teuthology.orchestra.run.vm04.stdout:Adding host vm04...
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:05] ENGINE Bus STARTING
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:06] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:06] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:06] ENGINE Bus STARTED
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:06] ENGINE Client ('192.168.123.104', 53682) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:08.028 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:07 vm04 ceph-mon[50920]: mgrmap e8: a(active, since 2s)
2026-03-10T05:53:09.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:08 vm04 ceph-mon[50920]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:09.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:08 vm04 ceph-mon[50920]: Generating ssh key...
2026-03-10T05:53:09.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:08 vm04 ceph-mon[50920]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:09.216 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:08 vm04 ceph-mon[50920]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:09.352 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Added host 'vm04' with addr '192.168.123.104'
2026-03-10T05:53:09.352 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mon service...
2026-03-10T05:53:09.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T05:53:09.757 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mgr service...
2026-03-10T05:53:10.006 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:09 vm04 ceph-mon[50920]: Deploying cephadm binary to vm04
2026-03-10T05:53:10.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:09 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:10.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:09 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:10.007 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:09 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:10.158 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T05:53:10.960 INFO:teuthology.orchestra.run.vm04.stdout:Enabling the dashboard module...
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: Added host vm04
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: Saving service mon spec with placement count:5
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2600446811' entity='client.admin'
2026-03-10T05:53:11.067 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:10 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2576521706' entity='client.admin'
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: Saving service mgr spec with placement count:2
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3752074746' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T05:53:12.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:11 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:12.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setuser ceph since I am not root
2026-03-10T05:53:12.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setgroup ceph since I am not root
2026-03-10T05:53:12.391 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:12.358+0000 7fbbdca5a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:53:12.657 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:12.403+0000 7fbbdca5a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart...
2026-03-10T05:53:12.796 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 9...
2026-03-10T05:53:12.913 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:12.810+0000 7fbbdca5a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-mon[50920]: from='mgr.14118 192.168.123.104:0/702186223' entity='mgr.a'
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3752074746' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-mon[50920]: mgrmap e9: a(active, since 7s)
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:12 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/1244314979' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.137+0000 7fbbdca5a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:53:13.218 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T05:53:13.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T05:53:13.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: from numpy import show_config as show_numpy_config
2026-03-10T05:53:13.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.221+0000 7fbbdca5a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:53:13.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.259+0000 7fbbdca5a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:53:13.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.330+0000 7fbbdca5a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.801+0000 7fbbdca5a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.906+0000 7fbbdca5a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.943+0000 7fbbdca5a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:13 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:13.975+0000 7fbbdca5a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.013+0000 7fbbdca5a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:53:14.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.047+0000 7fbbdca5a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:53:14.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.206+0000 7fbbdca5a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:53:14.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.253+0000 7fbbdca5a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:14.557 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.453+0000 7fbbdca5a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.712+0000 7fbbdca5a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.746+0000 7fbbdca5a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.783+0000 7fbbdca5a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.856+0000 7fbbdca5a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.890+0000 7fbbdca5a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:53:15.057 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:14 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:14.961+0000 7fbbdca5a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:53:15.325 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:15.065+0000 7fbbdca5a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:15.325 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:15.192+0000 7fbbdca5a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:53:15.325 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:15.226+0000 7fbbdca5a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: Active manager daemon a restarted
2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: Activating manager daemon a
2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: osdmap e3: 0 total, 0
up, 0 in 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: mgrmap e10: a(active, starting, since 0.0057129s) 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: Manager daemon a is now available 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:15.326 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' 
entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 9 is available 2026-03-10T05:53:16.429 INFO:teuthology.orchestra.run.vm04.stdout:Generating a dashboard self-signed certificate... 2026-03-10T05:53:16.866 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T05:53:16.866 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial admin user... 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:16] ENGINE Bus STARTING 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: mgrmap e11: a(active, since 1.04769s) 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:16] ENGINE Serving on http://192.168.123.104:8765 2026-03-10T05:53:17.258 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 
ceph-mon[50920]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T05:53:17.259 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:17.259 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$MkLxpYP4w5XYeROCiAi2u.6V8zp.MUcZp2dCBj8BJhhRprra1ulXq", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773121997, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T05:53:17.386 INFO:teuthology.orchestra.run.vm04.stdout:Fetching dashboard port number... 2026-03-10T05:53:17.752 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T05:53:17.752 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-10T05:53:17.753 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout:Ceph Dashboard is now available at: 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout: URL: https://vm04.local:8443/ 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout: User: admin 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout: Password: 0djwb3c2it 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:17.754 INFO:teuthology.orchestra.run.vm04.stdout:Saving cluster configuration to /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config directory 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: ceph telemetry on 2026-03-10T05:53:18.504 
INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:For more information see: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:18.504 INFO:teuthology.orchestra.run.vm04.stdout:Bootstrap complete. 2026-03-10T05:53:18.538 INFO:tasks.cephadm:Fetching config... 2026-03-10T05:53:18.538 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:18.538 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T05:53:18.560 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T05:53:18.560 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:18.560 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T05:53:18.618 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:16] ENGINE Serving on https://192.168.123.104:7150 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:16] ENGINE Bus STARTED 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: [10/Mar/2026:05:53:16] ENGINE Client ('192.168.123.104', 55798) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard 
ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3186100537' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: mgrmap e12: a(active, since 2s) 2026-03-10T05:53:18.619 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:18 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3782867788' entity='client.admin' 2026-03-10T05:53:18.623 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T05:53:18.623 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:18.623 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/keyring of=/dev/stdout 2026-03-10T05:53:18.691 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T05:53:18.691 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:18.691 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T05:53:18.752 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-10T05:53:18.752 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T05:53:18.829 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:18.844 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T05:53:18.875 INFO:teuthology.orchestra.run.vm06.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:18.883 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T05:53:18.915 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAX9ztxh3yIq9qA0LEzXnny6OJTw3zdHi+V1r10bNfPG ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:18.924 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T05:53:19.148 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config 
/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:19.577 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T05:53:19.578 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T05:53:19.773 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:20.227 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm06 2026-03-10T05:53:20.227 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:53:20.227 DEBUG:teuthology.orchestra.run.vm06:> dd of=/etc/ceph/ceph.conf 2026-03-10T05:53:20.242 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:53:20.242 DEBUG:teuthology.orchestra.run.vm06:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:20.300 INFO:tasks.cephadm:Adding host vm06 to orchestrator... 2026-03-10T05:53:20.300 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch host add vm06 2026-03-10T05:53:20.428 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:20 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/3824462814' entity='client.admin' 2026-03-10T05:53:20.428 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:20 vm04 ceph-mon[50920]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.428 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:20 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:20.509 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 
ceph-mon[50920]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: Deploying cephadm binary to vm06 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:22.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:21 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.264 INFO:teuthology.orchestra.run.vm04.stdout:Added host 'vm06' with addr '192.168.123.106' 2026-03-10T05:53:22.417 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch host ls --format=json 2026-03-10T05:53:22.594 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:22.827 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:22.827 INFO:teuthology.orchestra.run.vm04.stdout:[{"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.106", "hostname": "vm06", "labels": [], "status": ""}] 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: mgrmap e13: a(active, since 6s) 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: Added host vm06 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.924 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:22.970 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-10T05:53:22.970 
DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:53:22.970 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-10T05:53:22.985 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:53:22.985 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:23.041 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 2026-03-10T05:53:23.041 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch host add vm08 2026-03-10T05:53:23.205 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:24.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:23 vm04 ceph-mon[50920]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:53:24.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:24.946 INFO:teuthology.orchestra.run.vm04.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-10T05:53:25.101 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch host ls --format=json 2026-03-10T05:53:25.285 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:25.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:24 vm04 ceph-mon[50920]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", 
"hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:25.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:24 vm04 ceph-mon[50920]: Deploying cephadm binary to vm08 2026-03-10T05:53:25.530 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:53:25.530 INFO:teuthology.orchestra.run.vm04.stdout:[{"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.106", "hostname": "vm06", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T05:53:25.676 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T05:53:25.676 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd crush tunables default 2026-03-10T05:53:25.840 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: Added host vm08 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:25 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:26.950 INFO:teuthology.orchestra.run.vm04.stderr:adjusted tunables profile to default 2026-03-10T05:53:27.109 INFO:tasks.cephadm:Adding mon.a on vm04 2026-03-10T05:53:27.109 INFO:tasks.cephadm:Adding mon.b on vm06 2026-03-10T05:53:27.109 INFO:tasks.cephadm:Adding mon.c on vm08 2026-03-10T05:53:27.109 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch apply mon '3;vm04:192.168.123.104=a;vm06:192.168.123.106=b;vm08:192.168.123.108=c' 2026-03-10T05:53:27.276 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T05:53:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:26 vm04 ceph-mon[50920]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:53:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:26 vm04 ceph-mon[50920]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:53:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:26 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/2151212336' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T05:53:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:26 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:27.313 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T05:53:27.571 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-10T05:53:27.749 DEBUG:teuthology.orchestra.run.vm06:mon.b> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.b.service 2026-03-10T05:53:27.750 DEBUG:teuthology.orchestra.run.vm08:mon.c> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.c.service 2026-03-10T05:53:27.754 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T05:53:27.754 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph mon dump -f json 2026-03-10T05:53:27.971 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T05:53:28.008 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T05:53:28.287 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:53:28.288 
INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","modified":"2026-03-10T05:52:52.167191Z","created":"2026-03-10T05:52:52.167191Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T05:53:28.288 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T05:53:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2151212336' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T05:53:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 
ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:28.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:27 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm06:192.168.123.106=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: Saving service mon spec with placement vm04:192.168.123.104=a;vm06:192.168.123.106=b;vm08:192.168.123.108=c;count:3 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 
2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='client.? 192.168.123.108:0/2451423072' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:29.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:29.447 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T05:53:29.448 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph mon dump -f json 2026-03-10T05:53:29.710 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.c/config 2026-03-10T05:53:30.020 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:53:30.020 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","modified":"2026-03-10T05:52:52.167191Z","created":"2026-03-10T05:52:52.167191Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T05:53:30.020 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T05:53:30.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:30 vm04 ceph-mon[50920]: Deploying daemon mon.c on vm08 2026-03-10T05:53:31.203 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T05:53:31.204 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph mon dump -f json 2026-03-10T05:53:31.367 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.c/config 2026-03-10T05:53:31.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:31 vm06 ceph-mon[56706]: mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T05:53:35.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:35 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: Deploying daemon mon.b on vm06 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: mon.a calling monitor election 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: mon.c calling monitor election 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:35.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: monmap epoch 2 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 
2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: last_changed 2026-03-10T05:53:30.135007+0000 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: min_mon_release 19 (squid) 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: election_strategy: 1 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: fsmap 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: mgrmap e13: a(active, since 19s) 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: overall HEALTH_OK 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:35.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:35 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:36.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:36 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:36.133+0000 7fbba8dc1640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T05:53:41.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: mon.a calling monitor election 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: mon.c calling monitor election 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", 
"id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: monmap epoch 3 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: min_mon_release 19 (squid) 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: election_strategy: 1 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: fsmap 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 
vm08 ceph-mon[53504]: mgrmap e13: a(active, since 25s) 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: overall HEALTH_OK 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:41.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: mon.a calling monitor election 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: mon.c calling monitor election 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: monmap epoch 3 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:41.057 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: min_mon_release 19 (squid) 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: election_strategy: 1 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: fsmap 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: mgrmap e13: a(active, since 25s) 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: overall HEALTH_OK 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:41.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T05:53:41.248 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:53:41.248 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":3,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","modified":"2026-03-10T05:53:35.724486Z","created":"2026-03-10T05:52:52.167191Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T05:53:41.248 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 3 2026-03-10T05:53:41.418 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-10T05:53:41.418 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph config generate-minimal-conf 2026-03-10T05:53:41.605 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:53:41.829 INFO:teuthology.orchestra.run.vm04.stdout:# minimal ceph.conf for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:41.829 INFO:teuthology.orchestra.run.vm04.stdout:[global] 2026-03-10T05:53:41.829 INFO:teuthology.orchestra.run.vm04.stdout: fsid = 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:41.829 INFO:teuthology.orchestra.run.vm04.stdout: mon_host = [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-10T05:53:41.995 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-10T05:53:41.995 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:41.995 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T05:53:42.020 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:42.020 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:42.086 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:53:42.086 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T05:53:42.111 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:53:42.111 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:42.174 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:53:42.175 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T05:53:42.199 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:53:42.200 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 
2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.266 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: Reconfiguring daemon mon.a on vm04 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='client.? 
192.168.123.108:0/1738048434' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='client.? 
192.168.123.104:0/3217165599' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.267 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:41 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.269 INFO:tasks.cephadm:Adding mgr.a on vm04 2026-03-10T05:53:42.269 INFO:tasks.cephadm:Adding mgr.b on vm06 2026-03-10T05:53:42.269 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch apply mgr '2;vm04=a;vm06=b' 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:53:42.306 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 
2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Reconfiguring mon.a (unknown last config time)... 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: Reconfiguring daemon mon.a on vm04 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='client.? 
192.168.123.108:0/1738048434' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/3217165599' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:41 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.468 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.c/config 2026-03-10T05:53:42.713 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update... 2026-03-10T05:53:42.870 DEBUG:teuthology.orchestra.run.vm06:mgr.b> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.b.service 2026-03-10T05:53:42.872 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T05:53:42.872 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T05:53:42.872 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T05:53:42.887 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T05:53:42.887 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d? 
2026-03-10T05:53:42.944 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda 2026-03-10T05:53:42.944 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb 2026-03-10T05:53:42.944 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc 2026-03-10T05:53:42.944 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd 2026-03-10T05:53:42.944 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde 2026-03-10T05:53:42.944 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T05:53:42.944 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T05:53:42.944 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Deploying daemon mon.b on vm06 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.a calling monitor election 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.c calling monitor election 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: monmap epoch 2 2026-03-10T05:53:42.981 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: last_changed 2026-03-10T05:53:30.135007+0000 
2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: min_mon_release 19 (squid) 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: election_strategy: 1 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: fsmap 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mgrmap e13: a(active, since 19s) 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: overall HEALTH_OK 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.a calling monitor election 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.c calling 
monitor election 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mon.a is new leader, mons a,c in quorum (ranks 0,1) 
2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: monmap epoch 3 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: min_mon_release 19 (squid) 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: election_strategy: 1 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: fsmap 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: mgrmap e13: a(active, since 25s) 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: overall HEALTH_OK 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.982 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.982 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: Reconfiguring daemon mon.a on vm04 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='client.? 
192.168.123.108:0/1738048434' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/3217165599' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:42.983 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:42 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,10 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 05:53:20.613402286 +0000 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 05:50:02.260296557 +0000 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 05:50:02.260296557 
+0000 2026-03-10T05:53:43.002 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-10 05:46:58.246000000 +0000 2026-03-10T05:53:43.002 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T05:53:43.063 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T05:53:43.063 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T05:53:43.064 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.00015009 s, 3.4 MB/s 2026-03-10T05:53:43.064 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T05:53:43.120 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc 2026-03-10T05:53:43.177 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc 2026-03-10T05:53:43.177 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:43.177 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 247 Links: 1 Device type: fc,20 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 05:53:20.656402325 +0000 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 05:50:02.265296562 +0000 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 05:50:02.265296562 +0000 2026-03-10T05:53:43.178 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-10 05:46:58.258000000 +0000 2026-03-10T05:53:43.178 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T05:53:43.242 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T05:53:43.242 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T05:53:43.242 
INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.00017151 s, 3.0 MB/s 2026-03-10T05:53:43.243 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T05:53:43.301 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 05:53:20.691402356 +0000 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 05:50:02.259296556 +0000 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 05:50:02.259296556 +0000 2026-03-10T05:53:43.360 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-10 05:46:58.270000000 +0000 2026-03-10T05:53:43.360 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T05:53:43.425 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T05:53:43.425 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T05:53:43.425 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.00017087 s, 3.0 MB/s 2026-03-10T05:53:43.426 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T05:53:43.488 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde 2026-03-10T05:53:43.546 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde 2026-03-10T05:53:43.546 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:43.546 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T05:53:43.546 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:43.546 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:43.547 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 05:53:20.725402387 +0000 2026-03-10T05:53:43.547 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 05:50:02.248296545 +0000 2026-03-10T05:53:43.547 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 05:50:02.248296545 +0000 2026-03-10T05:53:43.547 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-10 05:46:58.280000000 +0000 2026-03-10T05:53:43.547 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T05:53:43.580 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 podman[57690]: 2026-03-10 05:53:43.549399138 +0000 UTC m=+0.016972696 container create c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, 
org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-10T05:53:43.611 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T05:53:43.611 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T05:53:43.611 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000182873 s, 2.8 MB/s 2026-03-10T05:53:43.612 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T05:53:43.675 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T05:53:43.675 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T05:53:43.700 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T05:53:43.701 DEBUG:teuthology.orchestra.run.vm06:> ls /dev/[sv]d? 
2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mon.b calling monitor election 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: Saving service mgr spec with placement vm04=a;vm06=b;count:2 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: Deploying daemon mgr.b on vm06 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mon.a calling monitor election 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mon.b calling monitor election 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mon.c calling monitor election 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: monmap epoch 3 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: min_mon_release 19 (squid) 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 
vm04 ceph-mon[50920]: election_strategy: 1 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: fsmap 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: mgrmap e13: a(active, since 27s) 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: overall HEALTH_OK 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.725 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:43.845 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vda 2026-03-10T05:53:43.845 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdb 2026-03-10T05:53:43.845 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdc 2026-03-10T05:53:43.845 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdd 2026-03-10T05:53:43.845 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vde 2026-03-10T05:53:43.845 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T05:53:43.845 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T05:53:43.845 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdb 2026-03-10T05:53:43.870 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdb 2026-03-10T05:53:43.870 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:43.870 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-10T05:53:43.870 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:43.871 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:43.871 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 05:53:24.744900790 +0000 2026-03-10T05:53:43.871 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 05:50:02.815288955 +0000 2026-03-10T05:53:43.871 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 05:50:02.815288955 +0000 2026-03-10T05:53:43.871 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 05:46:33.236000000 +0000 2026-03-10T05:53:43.871 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mon.b calling monitor election 2026-03-10T05:53:43.889 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: Saving service mgr spec with placement vm04=a;vm06=b;count:2 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: Deploying daemon mgr.b on vm06 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mon.a calling monitor election 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mon.b calling monitor election 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mon.c calling monitor election 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: monmap epoch 3 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:43.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: min_mon_release 19 (squid) 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: election_strategy: 1 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: 0: 
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: fsmap 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: mgrmap e13: a(active, since 27s) 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: overall HEALTH_OK 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 
podman[57690]: 2026-03-10 05:53:43.59597777 +0000 UTC m=+0.063551338 container init c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default) 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 podman[57690]: 2026-03-10 05:53:43.602525697 +0000 UTC m=+0.070099255 container start c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T05:53:43.890 
INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 bash[57690]: c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 podman[57690]: 2026-03-10 05:53:43.542836163 +0000 UTC m=+0.010409731 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 systemd[1]: Started Ceph mgr.b for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027. 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:43.704+0000 7f298e1e9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:53:43.890 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:43 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:43.755+0000 7f298e1e9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:53:43.965 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T05:53:43.965 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T05:53:43.965 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000404888 s, 1.3 MB/s 2026-03-10T05:53:43.966 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T05:53:44.014 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdc 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mon.b calling monitor election 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: Saving service mgr spec with placement vm04=a;vm06=b;count:2 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: Deploying daemon mgr.b on vm06 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mon.a calling monitor election 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mon.b calling monitor election 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mon.c calling monitor election 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: monmap epoch 3 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: last_changed 2026-03-10T05:53:35.724486+0000 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: created 2026-03-10T05:52:52.167191+0000 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 
ceph-mon[53504]: min_mon_release 19 (squid) 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: election_strategy: 1 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: fsmap 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: mgrmap e13: a(active, since 27s) 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: overall HEALTH_OK 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.055 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdc 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 05:53:24.778900780 +0000 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 05:50:02.820288962 +0000 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 05:50:02.820288962 +0000 2026-03-10T05:53:44.063 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 05:46:33.240000000 +0000 2026-03-10T05:53:44.063 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T05:53:44.111 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T05:53:44.111 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T05:53:44.112 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000278941 s, 1.8 MB/s 2026-03-10T05:53:44.112 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T05:53:44.147 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdd 2026-03-10T05:53:44.209 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:44.204+0000 7f298e1e9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdd 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 05:53:24.805900772 +0000 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 05:50:02.819288961 +0000 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 05:50:02.819288961 +0000 2026-03-10T05:53:44.220 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 05:46:33.267000000 +0000 2026-03-10T05:53:44.221 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T05:53:44.309 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T05:53:44.309 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T05:53:44.309 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.00012839 s, 4.0 MB/s 2026-03-10T05:53:44.310 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T05:53:44.329 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vde 2026-03-10T05:53:44.390 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vde 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 05:53:24.840900761 +0000 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 05:50:02.832288978 +0000 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 05:50:02.832288978 +0000 2026-03-10T05:53:44.391 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 05:46:33.311000000 +0000 2026-03-10T05:53:44.391 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T05:53:44.461 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T05:53:44.461 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T05:53:44.461 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000162644 s, 3.1 MB/s 2026-03-10T05:53:44.462 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T05:53:44.522 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T05:53:44.522 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T05:53:44.540 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T05:53:44.540 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d? 
2026-03-10T05:53:44.601 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda 2026-03-10T05:53:44.601 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb 2026-03-10T05:53:44.602 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc 2026-03-10T05:53:44.602 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd 2026-03-10T05:53:44.602 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde 2026-03-10T05:53:44.602 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T05:53:44.602 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T05:53:44.602 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 251 Links: 1 Device type: fc,10 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 05:53:27.511336529 +0000 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 05:50:01.583875594 +0000 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 05:50:01.583875594 +0000 2026-03-10T05:53:44.661 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 05:46:08.223000000 +0000 2026-03-10T05:53:44.661 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:44.728 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:44.536+0000 7f298e1e9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: from numpy import show_config as show_numpy_config 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:44.623+0000 7f298e1e9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:53:44.728 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:44.661+0000 7f298e1e9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:53:44.729 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-10T05:53:44.729 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T05:53:44.729 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000147646 s, 3.5 MB/s 2026-03-10T05:53:44.730 DEBUG:teuthology.orchestra.run.vm08:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T05:53:44.790 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:53:44.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T05:53:44.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:53:44 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:53:44.728+0000 7fbba8dc1640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 05:53:27.539336539 +0000 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 05:50:01.669875709 +0000 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 05:50:01.669875709 +0000 2026-03-10T05:53:44.851 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 05:46:08.234000000 +0000 2026-03-10T05:53:44.851 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T05:53:44.917 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-10T05:53:44.918 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T05:53:44.918 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000151894 s, 3.4 MB/s 2026-03-10T05:53:44.919 DEBUG:teuthology.orchestra.run.vm08:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T05:53:44.978 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 05:53:27.567336549 +0000 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 05:50:01.578875588 +0000 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 05:50:01.578875588 +0000 2026-03-10T05:53:45.037 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 05:46:08.241000000 +0000 2026-03-10T05:53:45.037 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T05:53:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:45.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.079 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T05:53:45.079 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T05:53:45.079 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000164728 s, 3.1 MB/s
2026-03-10T05:53:45.080 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T05:53:45.136 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-10T05:53:45.138 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:44 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:44.736+0000 7f298e1e9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:53:45.195 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-10T05:53:45.195 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 05:53:27.605336562 +0000
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 05:50:01.582875593 +0000
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 05:50:01.582875593 +0000
2026-03-10T05:53:45.196 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 05:46:08.293000000 +0000
2026-03-10T05:53:45.196 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T05:53:45.258 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T05:53:45.259 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T05:53:45.259 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000233497 s, 2.2 MB/s
2026-03-10T05:53:45.259 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T05:53:45.317 INFO:tasks.cephadm:Deploying osd.0 on vm04 with /dev/vde...
2026-03-10T05:53:45.317 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- lvm zap /dev/vde
2026-03-10T05:53:45.485 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config
2026-03-10T05:53:45.497 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.235+0000 7f298e1e9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:53:45.497 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.344+0000 7f298e1e9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:45.497 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.382+0000 7f298e1e9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:53:45.497 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.417+0000 7f298e1e9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:45.497 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.458+0000 7f298e1e9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:53:45.762 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.495+0000 7f298e1e9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:53:45.762 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.662+0000 7f298e1e9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:53:45.762 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.712+0000 7f298e1e9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: Reconfiguring mgr.a (unknown last config time)...
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: Reconfiguring daemon mgr.a on vm04
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:45.983 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:45 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: Reconfiguring mgr.a (unknown last config time)...
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: Reconfiguring daemon mgr.a on vm04
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:46.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:45 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:45.935+0000 7f298e1e9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: Reconfiguring mgr.a (unknown last config time)...
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: Reconfiguring daemon mgr.a on vm04
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:46.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:45 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:46.456 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.201+0000 7f298e1e9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:53:46.456 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.236+0000 7f298e1e9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:53:46.456 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.274+0000 7f298e1e9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:53:46.456 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.347+0000 7f298e1e9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:53:46.456 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.382+0000 7f298e1e9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:53:46.479 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T05:53:46.496 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch daemon add osd vm04:/dev/vde
2026-03-10T05:53:46.655 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config
2026-03-10T05:53:46.726 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.454+0000 7f298e1e9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:53:46.726 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.560+0000 7f298e1e9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:46.726 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.689+0000 7f298e1e9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:53:47.122 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:53:46 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:53:46.724+0000 7f298e1e9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: Standby manager daemon b started
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:53:47.332 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:47 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: Standby manager daemon b started
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:53:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:47 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:47.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: Standby manager daemon b started
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/4104583995' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:53:47.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:47 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: mgrmap e14: a(active, since 31s), standbys: b
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2680722229' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]': finished
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T05:53:48.133 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: mgrmap e14: a(active, since 31s), standbys: b
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/2680722229' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]': finished
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T05:53:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: mgrmap e14: a(active, since 31s), standbys: b
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/2680722229' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]: dispatch
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "08df3d5e-0cfd-417f-8237-9b6edd4c9520"}]': finished
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T05:53:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:49.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:49 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/2194809336' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:53:49.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:49 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/2194809336' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:53:49.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:49 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/2194809336' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:53:50.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:50 vm08 ceph-mon[53504]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:50.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:50 vm04 ceph-mon[50920]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:50.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:50 vm06 ceph-mon[56706]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:52.372 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:52 vm04 ceph-mon[50920]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:52.372 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:52 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T05:53:52.372 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:52 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:52.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:52 vm08 ceph-mon[53504]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:52.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:52 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T05:53:52.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:52 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:52.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:52 vm06 ceph-mon[56706]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:52.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:52 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T05:53:52.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:52 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:53.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:53 vm06 ceph-mon[56706]: Deploying daemon osd.0 on vm04
2026-03-10T05:53:53.645 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:53 vm04 ceph-mon[50920]: Deploying daemon osd.0 on vm04
2026-03-10T05:53:53.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:53 vm08 ceph-mon[53504]: Deploying daemon osd.0 on vm04
2026-03-10T05:53:54.545 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:54 vm04 ceph-mon[50920]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:54.545 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:54 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:54.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:54 vm06 ceph-mon[56706]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:54.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:54 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:54.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:54 vm08 ceph-mon[53504]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:54.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:54 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.396 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:55 vm04 ceph-mon[50920]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:55.562 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 0 on host 'vm04'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:55 vm06 ceph-mon[56706]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:55.725 DEBUG:teuthology.orchestra.run.vm04:osd.0> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.0.service
2026-03-10T05:53:55.727 INFO:tasks.cephadm:Deploying osd.1 on vm06 with /dev/vde...
2026-03-10T05:53:55.727 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- lvm zap /dev/vde
2026-03-10T05:53:55.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:55 vm08 ceph-mon[53504]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:55.895 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.b/config
2026-03-10T05:53:56.307 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:53:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T05:53:55.968+0000 7efeda041740 -1 osd.0 0 log_to_monitors true
2026-03-10T05:53:56.315 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:56 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:56.315 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:56 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.570 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:56 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:56.570 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:56 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.570 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:56 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.570 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:56 vm04 ceph-mon[50920]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:53:56.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:56 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:56 vm06 ceph-mon[56706]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:53:56.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:56 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:56.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:56 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:56 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:56.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:56 vm08 ceph-mon[53504]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:53:56.902 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T05:53:56.917 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch daemon add osd vm06:/dev/vde
2026-03-10T05:53:57.084 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.b/config
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: Detected new or changed devices on vm04
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: Adjusting osd_memory_target on vm04 to 257.0M
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a'
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T05:53:57.757 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:53:57.758 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:53:57.758 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:57 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:58.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: Detected new or changed devices on vm04 2026-03-10T05:53:58.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: Adjusting osd_memory_target on vm04 to 257.0M 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 
2026-03-10T05:53:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:57 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: Detected new or changed devices on vm04 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: Adjusting osd_memory_target on vm04 to 257.0M 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 
ceph-mon[50920]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:53:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:57 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='client.24124 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='client.? 192.168.123.106:0/56554254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]': finished 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='client.? 192.168.123.106:0/918335157' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:53:58.862 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:58 vm06 ceph-mon[56706]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' 2026-03-10T05:53:59.057 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:53:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T05:53:58.753+0000 7efed5fc2640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T05:53:59.057 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:53:58 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T05:53:58.760+0000 7efed15eb640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='client.24124 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='osd.0 
[v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='client.? 192.168.123.106:0/56554254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]': finished 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='client.? 192.168.123.106:0/918335157' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:53:59.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:58 vm04 ceph-mon[50920]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' 2026-03-10T05:53:59.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='client.24124 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", 
"id": 0}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='client.? 192.168.123.106:0/56554254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9e1d4c46-2510-4f16-8459-2bdfc6731f12"}]': finished 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='client.? 
192.168.123.106:0/918335157' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:53:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:58 vm08 ceph-mon[53504]: from='osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812]' entity='osd.0' 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: purged_snaps scrub starts 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: purged_snaps scrub ok 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812] boot 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:53:59 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:00.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: purged_snaps scrub starts 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: purged_snaps scrub ok 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 
ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812] boot 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:53:59 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: purged_snaps scrub starts 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: purged_snaps scrub ok 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: osd.0 [v2:192.168.123.104:6802/3068485812,v1:192.168.123.104:6803/3068485812] boot 2026-03-10T05:54:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:54:00.306 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:00.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:53:59 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:02.136 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:01 vm06 ceph-mon[56706]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:54:02.136 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:01 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:02.136 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:01 vm06 ceph-mon[56706]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:02.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:01 vm08 ceph-mon[53504]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:54:02.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:01 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:02.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:01 vm08 ceph-mon[53504]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:01 vm04 ceph-mon[50920]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:54:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:01 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:01 vm04 ceph-mon[50920]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:02.954 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:02 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:54:02.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:02 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:02 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:54:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:02 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:02 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:54:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:02 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:04.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:03 vm06 ceph-mon[56706]: Deploying daemon osd.1 on vm06 2026-03-10T05:54:04.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:03 vm06 ceph-mon[56706]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:04.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:03 vm08 ceph-mon[53504]: Deploying daemon osd.1 on vm06 2026-03-10T05:54:04.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:03 vm08 ceph-mon[53504]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:04.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:03 vm04 ceph-mon[50920]: 
Deploying daemon osd.1 on vm06 2026-03-10T05:54:04.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:03 vm04 ceph-mon[50920]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:05.098 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:04 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:05.099 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:04 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:05.099 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:04 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:05.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:04 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:05.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:04 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:05.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:04 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:05.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:04 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:05.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:04 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:05.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:04 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.082 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 1 on host 'vm06' 2026-03-10T05:54:06.217 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:06.217 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.217 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.217 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:06.217 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:06.217 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:05 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.254 DEBUG:teuthology.orchestra.run.vm06:osd.1> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.1.service 2026-03-10T05:54:06.255 INFO:tasks.cephadm:Deploying osd.2 on vm08 with /dev/vde... 
2026-03-10T05:54:06.255 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- lvm zap /dev/vde 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:06.277 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:05 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 ceph-mon[50920]: pgmap v25: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:06.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 
ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:06.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:06.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:05 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:06.416 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.c/config 2026-03-10T05:54:07.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:06 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:07.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:06 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:06 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:06 vm08 ceph-mon[53504]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:07.154 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:06 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:07.154 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:06 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.154 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:06 vm06 ceph-mon[56706]: 
from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.154 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:06 vm06 ceph-mon[56706]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:07.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:06 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:07.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:06 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:06 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:07.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:06 vm04 ceph-mon[50920]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:07.465 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T05:54:07.481 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph orch daemon add osd vm08:/dev/vde 2026-03-10T05:54:07.679 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.c/config 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["1"]}]': finished 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: Detected new or changed devices on vm06 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: pgmap v27: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:08.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:54:08.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:54:08.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:08 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:08.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:08.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 
2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: Detected new or changed devices on vm06 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 
ceph-mon[56706]: pgmap v27: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:54:08.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:08 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: Detected new or changed devices on vm06 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: pgmap v27: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' 
entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:54:08.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:08 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='client.24151 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='client.? 192.168.123.108:0/852332358' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]': finished 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: osdmap e13: 3 total, 1 up, 3 in 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' 2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:09 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:08 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1[60612]: 2026-03-10T05:54:08.958+0000 7f5038752640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T05:54:09.388 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:08 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1[60612]: 2026-03-10T05:54:08.964+0000 7f5033d7b640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='client.24151 -' entity='client.admin' 
cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='client.? 192.168.123.108:0/852332358' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]': finished 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: osdmap e13: 3 total, 1 up, 3 in 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' 2026-03-10T05:54:09.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:09 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='client.24151 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", 
"id": 1}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='client.? 192.168.123.108:0/852332358' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fa5afa75-44db-4f6b-9c47-cdbdb9647e87"}]': finished 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: osdmap e13: 3 total, 1 up, 3 in 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690]' entity='osd.1' 2026-03-10T05:54:09.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:09 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:10.388 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: purged_snaps scrub starts 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: purged_snaps scrub ok 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: pgmap v30: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: from='client.? 192.168.123.108:0/2016390350' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690] boot 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:10 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:10.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: purged_snaps scrub starts 2026-03-10T05:54:10.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: purged_snaps scrub ok 2026-03-10T05:54:10.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: pgmap v30: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: from='client.? 
192.168.123.108:0/2016390350' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:54:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690] boot 2026-03-10T05:54:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T05:54:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:10 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: purged_snaps scrub starts 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: purged_snaps scrub ok 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: pgmap v30: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: from='client.? 
192.168.123.108:0/2016390350' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: osd.1 [v2:192.168.123.106:6800/3343442690,v1:192.168.123.106:6801/3343442690] boot 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:10 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:12.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:12 vm08 ceph-mon[53504]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T05:54:12.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:12 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:12.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:12 vm08 ceph-mon[53504]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:12 vm06 ceph-mon[56706]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T05:54:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:12 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:12 vm06 ceph-mon[56706]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:12 vm04 ceph-mon[50920]: osdmap e15: 3 total, 2 up, 
3 in 2026-03-10T05:54:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:12 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:12 vm04 ceph-mon[50920]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:13.162 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:13 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:54:13.162 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:13 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:13.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:13 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:54:13.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:13 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:13.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:13 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:54:13.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:13 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:14.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:14 vm06 ceph-mon[56706]: Deploying daemon osd.2 on vm08 2026-03-10T05:54:14.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:14 vm06 ceph-mon[56706]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB 
avail 2026-03-10T05:54:14.523 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:14 vm08 ceph-mon[53504]: Deploying daemon osd.2 on vm08 2026-03-10T05:54:14.523 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:14 vm08 ceph-mon[53504]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:14.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:14 vm04 ceph-mon[50920]: Deploying daemon osd.2 on vm08 2026-03-10T05:54:14.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:14 vm04 ceph-mon[50920]: pgmap v34: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:15.383 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:15 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:15.383 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:15 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:15.383 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:15 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:15.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:15.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:15.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:15 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:15.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:15 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:15.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:15 vm06 
ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:15.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:15 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.413 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 2 on host 'vm08' 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:16.542 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:16 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:16 vm04 ceph-mon[50920]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:16.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
05:54:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:16.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:16.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:16 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.578 DEBUG:teuthology.orchestra.run.vm08:osd.2> sudo journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.2.service 2026-03-10T05:54:16.579 INFO:tasks.cephadm:Waiting for 3 OSDs to come up... 2026-03-10T05:54:16.579 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd stat -f json 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T05:54:16.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:16 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:16.777 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:17.021 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:17.172 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1773122049,"num_in_osds":3,"osd_in_since":1773122048,"num_remapped_pgs":0} 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.216 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:17 vm08 ceph-mon[53504]: from='client.? 
192.168.123.104:0/3775751353' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:17.216 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:16 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T05:54:16.815+0000 7fb51e981740 -1 osd.2 0 log_to_monitors true 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:17 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/3775751353' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:54:17.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:17 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/3775751353' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:18.173 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd stat -f json 2026-03-10T05:54:18.346 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:18.462 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:18.462 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T05:54:18.462 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:54:18.462 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:18.463 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:18 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:18.554 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:18.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:18 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.554 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:18 vm08 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T05:54:18.415+0000 7fb51b115640 -1 osd.2 0 waiting for initial osdmap 2026-03-10T05:54:18.554 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:18 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T05:54:18.419+0000 7fb51672c640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:54:18.589 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: pgmap v36: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.638 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:18.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:18 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:18.757 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":17,"num_osds":3,"num_up_osds":2,"osd_up_since":1773122049,"num_in_osds":3,"osd_in_since":1773122048,"num_remapped_pgs":0} 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: Detected new or changed devices on vm08 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: Adjusting osd_memory_target on vm08 to 4353M 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: 
osdmap e17: 3 total, 2 up, 3 in 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:19 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/1792342952' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:19.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: Detected new or changed devices on vm08 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: Adjusting osd_memory_target on vm08 to 4353M 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: osdmap e17: 3 total, 2 up, 3 in 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:19 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/1792342952' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: Detected new or changed devices on vm08 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: Adjusting osd_memory_target on vm08 to 4353M 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: osdmap e17: 3 total, 2 up, 3 in 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:19.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:19 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/1792342952' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:19.758 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd stat -f json 2026-03-10T05:54:19.931 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:20.163 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: purged_snaps scrub starts 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: purged_snaps scrub ok 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352] boot 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:20 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/2717141697' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:20.341 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1773122059,"num_in_osds":3,"osd_in_since":1773122048,"num_remapped_pgs":0} 2026-03-10T05:54:20.341 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd dump --format=json 2026-03-10T05:54:20.523 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: purged_snaps scrub starts 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: purged_snaps scrub ok 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352] boot 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:20.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:20 vm08 ceph-mon[53504]: from='client.? 
192.168.123.104:0/2717141697' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: purged_snaps scrub starts 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: purged_snaps scrub ok 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: pgmap v39: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: osd.2 [v2:192.168.123.108:6800/3964953352,v1:192.168.123.108:6801/3964953352] boot 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:20.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:20 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/2717141697' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:54:20.774 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:20.774 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":19,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","created":"2026-03-10T05:52:53.393264+0000","modified":"2026-03-10T05:54:20.415702+0000","last_up_change":"2026-03-10T05:54:19.413219+0000","last_in_change":"2026-03-10T05:54:08.797192+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"08df3d5e-0cfd-417f-8237-9b6edd4c9520","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6803","nonce":3068485812}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6805","nonce":3068485812}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6809","nonce":3068485812}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6807","nonce":3068485812}]},"public_addr":"192.168.123.104:6803/3068485812","cluster_addr":"192.168.123.104:6805/3068485812","heartbeat_back_addr":"192.168.123.104:6809/3068485812","heartbeat_front_addr":"192.168.123.104:6
807/3068485812","state":["exists","up"]},{"osd":1,"uuid":"9e1d4c46-2510-4f16-8459-2bdfc6731f12","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6801","nonce":3343442690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6803","nonce":3343442690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6807","nonce":3343442690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6805","nonce":3343442690}]},"public_addr":"192.168.123.106:6801/3343442690","cluster_addr":"192.168.123.106:6803/3343442690","heartbeat_back_addr":"192.168.123.106:6807/3343442690","heartbeat_front_addr":"192.168.123.106:6805/3343442690","state":["exists","up"]},{"osd":2,"uuid":"fa5afa75-44db-4f6b-9c47-cdbdb9647e87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6801","nonce":3964953352}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6803","nonce":3964953352}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6807","nonce":3964953352}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6805","nonce":3964953352}]},"public_addr":"192.168.123.108:6801/3964953352","cluster_addr":"192.
168.123.108:6803/3964953352","heartbeat_back_addr":"192.168.123.108:6807/3964953352","heartbeat_front_addr":"192.168.123.108:6805/3964953352","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:53:57.020677+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:07.252490+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:17.820753+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/1671790581":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/2968902969":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6801/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6800/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/1123864557":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6801/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6800/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/852751347":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/937056356":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/2550988868":"2026-03-11T05:53:04.807386+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T05:54:20.927 
INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T05:54:20.927 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T05:54:20.927 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T05:54:20.927 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T05:54:20.928 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph mgr dump --format=json 2026-03-10T05:54:21.090 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:21.340 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:21.428 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:21 vm04 ceph-mon[50920]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:54:21.429 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:21 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/4176138059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.429 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:21 vm04 ceph-mon[50920]: pgmap v42: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:21.429 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:21 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:21.429 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:21 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/3510300033' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.493 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":4083188896},{"type":"v1","addr":"192.168.123.104:6801","nonce":4083188896}]},"active_addr":"192.168.123.104:6801/4083188896","active_change":"2026-03-10T05:53:15.228643+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14211,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.104:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":109201308}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.16
8.123.104:0","nonce":3382309317}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":2960886822}]}]} 2026-03-10T05:54:21.494 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T05:54:21.495 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T05:54:21.495 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd dump --format=json 2026-03-10T05:54:21.669 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:21.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:21 vm08 ceph-mon[53504]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:54:21.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:21 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/4176138059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:21 vm08 ceph-mon[53504]: pgmap v42: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:21.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:21 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:21.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:21 vm08 ceph-mon[53504]: from='client.? 
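The log above shows the harness logging "waiting for all up" and then polling `ceph osd dump --format=json`. As a minimal sketch (not teuthology's actual implementation), the check presumably amounts to parsing that JSON and verifying every OSD in the `osds` array is both up and in:

```python
import json

def all_osds_up_in(osd_dump_json: str) -> bool:
    """Return True when every OSD in an `osd dump --format=json` blob
    reports up == 1 and in == 1 (the condition the harness waits for)."""
    dump = json.loads(osd_dump_json)
    osds = dump.get("osds", [])
    return bool(osds) and all(o["up"] == 1 and o["in"] == 1 for o in osds)

# Trimmed synthetic dump matching the log's "osdmap e19: 3 total, 3 up, 3 in".
sample = json.dumps({"osds": [{"osd": i, "up": 1, "in": 1} for i in range(3)]})
print(all_osds_up_in(sample))  # True
```

The real harness additionally retries on a timer until the condition holds; this sketch only shows the per-poll predicate.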
192.168.123.104:0/3510300033' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:21 vm06 ceph-mon[56706]: osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:54:21.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:21 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/4176138059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:21 vm06 ceph-mon[56706]: pgmap v42: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:21.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:21 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:21.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:21 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/3510300033' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:54:21.903 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:21.903 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":20,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","created":"2026-03-10T05:52:53.393264+0000","modified":"2026-03-10T05:54:21.419383+0000","last_up_change":"2026-03-10T05:54:19.413219+0000","last_in_change":"2026-03-10T05:54:08.797192+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T05:54:21.275045+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_
dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"08df3d5e-0cfd-417f-8237-9b6edd4c9520","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6803","nonce":3068485812}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6805","nonce":3068485812}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6809","nonce":3068485812}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6807","nonce":3068485812}]},"public_addr":"192.168.123.104:6803/3068485812","cluster_addr":"192.168.123.104:6805/3068485812","heartbeat_back_addr":"192.168.123.104:6809/3068485812","heartbeat_front_addr":"192.168.123.104:6807/3068485812","state":["exists","up"]},{"osd":1,"uuid":"9e1d4c46-2510-4f16-8459-2bdfc6731f12","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"u
p_from":14,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6801","nonce":3343442690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6803","nonce":3343442690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6807","nonce":3343442690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6805","nonce":3343442690}]},"public_addr":"192.168.123.106:6801/3343442690","cluster_addr":"192.168.123.106:6803/3343442690","heartbeat_back_addr":"192.168.123.106:6807/3343442690","heartbeat_front_addr":"192.168.123.106:6805/3343442690","state":["exists","up"]},{"osd":2,"uuid":"fa5afa75-44db-4f6b-9c47-cdbdb9647e87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6801","nonce":3964953352}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6803","nonce":3964953352}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6807","nonce":3964953352}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6805","nonce":3964953352}]},"public_addr":"192.168.123.108:6801/3964953352","cluster_addr":"192.168.123.108:6803/3964953352","heartbeat_back_addr":"192.168.123.108:6807/3964953352","heartbeat_front_addr":"192.168.123.108:6805/3964953352","state":["exists","up"]}],"osd_xinfo":[{"o
sd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:53:57.020677+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:07.252490+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:17.820753+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/1671790581":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/2968902969":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6801/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6800/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/1123864557":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6801/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6800/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/852751347":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/937056356":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/2550988868":"2026-03-11T05:53:04.807386+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T05:54:22.052 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T05:54:22.052 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd dump --format=json 2026-03-10T05:54:22.222 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:22.472 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:22.472 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":21,"fsid":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","created":"2026-03-10T05:52:53.393264+0000","modified":"2026-03-10T05:54:22.425320+0000","last_up_change":"2026-03-10T05:54:19.413219+0000","last_in_change":"2026-03-10T05:54:08.797192+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T05:54:21.275045+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_p
reluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"08df3d5e-0cfd-417f-8237-9b6edd4c9520","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6803","nonce":3068485812}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6805","nonce":3068485812}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6809","nonce":3068485812}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":3068485812},{"type":"v1","addr":"192.168.123.104:6807","nonce":3068485812}]},"public_addr":"192.168.123.104:6803/3068485812","cluster_addr":"192.168.123.104
:6805/3068485812","heartbeat_back_addr":"192.168.123.104:6809/3068485812","heartbeat_front_addr":"192.168.123.104:6807/3068485812","state":["exists","up"]},{"osd":1,"uuid":"9e1d4c46-2510-4f16-8459-2bdfc6731f12","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6801","nonce":3343442690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6803","nonce":3343442690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6807","nonce":3343442690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3343442690},{"type":"v1","addr":"192.168.123.106:6805","nonce":3343442690}]},"public_addr":"192.168.123.106:6801/3343442690","cluster_addr":"192.168.123.106:6803/3343442690","heartbeat_back_addr":"192.168.123.106:6807/3343442690","heartbeat_front_addr":"192.168.123.106:6805/3343442690","state":["exists","up"]},{"osd":2,"uuid":"fa5afa75-44db-4f6b-9c47-cdbdb9647e87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6801","nonce":3964953352}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6803","nonce":3964953352}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":3964953352},{"type":"v1","addr":"192.168.123.108:6807","nonce":3964953352}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":3964953352},{"type":"v1","add
r":"192.168.123.108:6805","nonce":3964953352}]},"public_addr":"192.168.123.108:6801/3964953352","cluster_addr":"192.168.123.108:6803/3964953352","heartbeat_back_addr":"192.168.123.108:6807/3964953352","heartbeat_front_addr":"192.168.123.108:6805/3964953352","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:53:57.020677+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:07.252490+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:54:17.820753+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/1671790581":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/2968902969":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6801/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:6800/3024521269":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/1123864557":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6801/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:6800/255743450":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/852751347":"2026-03-11T05:53:15.228547+0000","192.168.123.104:0/937056356":"2026-03-11T05:53:04.807386+0000","192.168.123.104:0/2550988868":"2026-03-11T05:53:04.807386+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretc
h_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T05:54:22.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:22.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 ceph-mon[50920]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T05:54:22.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:22.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3764149381' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:22.668 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph tell osd.0 flush_pg_stats 2026-03-10T05:54:22.668 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph tell osd.1 flush_pg_stats 2026-03-10T05:54:22.669 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph tell osd.2 flush_pg_stats 2026-03-10T05:54:22.719 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63830]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 
2026-03-10T05:54:22.719 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63830]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:22.719 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:22.719 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63830]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:22.719 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:22.719 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 ceph-mon[56706]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T05:54:22.719 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:22.719 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/3764149381' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 ceph-mon[53504]: osdmap e20: 3 total, 3 up, 3 in 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/3764149381' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60509]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60509]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60509]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:22.805 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60505]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T05:54:22.805 
INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60505]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:22.805 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:22.805 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 05:54:22 vm08 sudo[60505]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:22.924 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:22.962 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67686]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T05:54:22.962 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67686]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:22.962 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:22.962 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67686]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:22.962 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67682]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T05:54:22.962 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67682]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:22.962 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:22.962 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 05:54:22 vm04 sudo[67682]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:22.978 
INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:23.063 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:23.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63834]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T05:54:23.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63834]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T05:54:23.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T05:54:23.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:22 vm06 sudo[63834]: pam_unix(sudo:session): session closed for user root 2026-03-10T05:54:23.349 INFO:teuthology.orchestra.run.vm04.stdout:77309411330 2026-03-10T05:54:23.349 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd last-stat-seq osd.2 2026-03-10T05:54:23.450 INFO:teuthology.orchestra.run.vm04.stdout:60129542149 2026-03-10T05:54:23.450 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd last-stat-seq osd.1 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 
ceph-mon[50920]: osdmap e21: 3 total, 3 up, 3 in 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/4041911755' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:23.479 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:23 vm04 ceph-mon[50920]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T05:54:23.525 INFO:teuthology.orchestra.run.vm04.stdout:38654705671 2026-03-10T05:54:23.525 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd last-stat-seq osd.0 2026-03-10T05:54:23.595 INFO:teuthology.orchestra.run.vm04.stderr:Inferring 
config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:23.754 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:23.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:23.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: osdmap e21: 3 total, 3 up, 3 in 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/4041911755' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' 
entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: pgmap v45: 1 pgs: 1 
active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:23.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:23 vm08 ceph-mon[53504]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T05:54:23.856 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: osdmap e21: 3 total, 3 up, 3 in 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/4041911755' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: 
dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:23.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:23.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:54:23.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: from='admin socket' entity='admin socket' cmd=smart args=[json]: 
finished 2026-03-10T05:54:23.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:23.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:23 vm06 ceph-mon[56706]: osdmap e22: 3 total, 3 up, 3 in 2026-03-10T05:54:23.904 INFO:teuthology.orchestra.run.vm04.stdout:77309411330 2026-03-10T05:54:24.062 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411330 got 77309411330 for osd.2 2026-03-10T05:54:24.062 DEBUG:teuthology.parallel:result is None 2026-03-10T05:54:24.132 INFO:teuthology.orchestra.run.vm04.stdout:60129542149 2026-03-10T05:54:24.161 INFO:teuthology.orchestra.run.vm04.stdout:38654705670 2026-03-10T05:54:24.308 INFO:tasks.cephadm.ceph_manager.ceph:need seq 60129542149 got 60129542149 for osd.1 2026-03-10T05:54:24.308 DEBUG:teuthology.parallel:result is None 2026-03-10T05:54:24.327 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705671 got 38654705670 for osd.0 2026-03-10T05:54:24.518 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:24 vm04 ceph-mon[50920]: mgrmap e15: a(active, since 68s), standbys: b 2026-03-10T05:54:24.518 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:24 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/4226109794' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:54:24.518 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:24 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/444588043' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:54:24.518 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:24 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/430232515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:24.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:24 vm08 ceph-mon[53504]: mgrmap e15: a(active, since 68s), standbys: b 2026-03-10T05:54:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:24 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/4226109794' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:54:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:24 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/444588043' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:54:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:24 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/430232515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:24.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:24 vm06 ceph-mon[56706]: mgrmap e15: a(active, since 68s), standbys: b 2026-03-10T05:54:24.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:24 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/4226109794' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:54:24.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:24 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/444588043' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:54:24.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:24 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/430232515' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:25.328 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph osd last-stat-seq osd.0 2026-03-10T05:54:25.507 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:25.607 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:25 vm04 ceph-mon[50920]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:25.788 INFO:teuthology.orchestra.run.vm04.stdout:38654705672 2026-03-10T05:54:25.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:25 vm06 ceph-mon[56706]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:26.017 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705671 got 38654705672 for osd.0 2026-03-10T05:54:26.017 DEBUG:teuthology.parallel:result is None 2026-03-10T05:54:26.017 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T05:54:26.017 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph pg dump --format=json 2026-03-10T05:54:26.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:25 vm08 ceph-mon[53504]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:26.187 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:26.434 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:26.435 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-10T05:54:26.610 
INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":47,"stamp":"2026-03-10T05:54:25.243943+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":1,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82692,"kb_used_data":1900,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819580,"statfs":{"total":64411926528,"available":64327249920,"internally_reserved":0,"allocated":1945600,"data_stored":1546158,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[1],"upper_bound":2},"perf_stat":{"commit_latency
_ms":15,"apply_latency_ms":15,"commit_latency_ns":15000000,"apply_latency_ns":15000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000269"},"pg_stats":[{"pgid":"1.0","version":"21'32","reported_seq":57,"reported_epoch":22,"state":"active+clean","last_fresh":"2026-03-10T05:54:23.465205+0000","last_change":"2026-03-10T05:54:22.452877+0000","last_active":"2026-03-10T05:54:23.465205+0000","last_peered":"2026-03-10T05:54:23.465205+0000","last_clean":"2026-03-10T05:54:23.465205+0000","last_became_active":"2026-03-10T05:54:22.452628+0000","last_became_peered":"2026-03-10T05:54:22.452628+0000","last_unstale":"2026-03-10T05:54:23.465205+0000","last_undegraded":"2026-03-10T05:54:23.465205+0000","last_fullsized":"2026-03-10T05:54:23.465205+0000","mapping_epoch":20,"log_start":"0'0","ondisk_log_start
":"0'0","created":20,"last_epoch_clean":21,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T05:54:21.419383+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T05:54:21.419383+0000","last_clean_scrub_stamp":"2026-03-10T05:54:21.419383+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:07:00.794854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degrade
d":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411330,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available":21442510848,"internally_reserved":0,"allocated":643072,"data_stored":512868,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":8,"apply_latency_ms":8,"commit_latency_ns":8000000,"apply_latency_ns":8000000},"alerts":[]},{"osd":1,"up_from":14,"seq":60129542150,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"st
atfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":651264,"data_stored":516645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[1],"upper_bound":2},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705672,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27612,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939812,"statfs":{"total":21470642176,"available":21442367488,"internally_reserved":0,"allocated":651264,"data_stored":516645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T05:54:26.610 
DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph pg dump --format=json 2026-03-10T05:54:26.808 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:26.832 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:26 vm04 ceph-mon[50920]: from='client.? 192.168.123.104:0/3787189307' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:26.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:26 vm06 ceph-mon[56706]: from='client.? 192.168.123.104:0/3787189307' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:27.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:26 vm08 ceph-mon[53504]: from='client.? 192.168.123.104:0/3787189307' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:54:27.066 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:27.066 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-10T05:54:27.222 
INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":47,"stamp":"2026-03-10T05:54:25.243943+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":1,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82692,"kb_used_data":1900,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819580,"statfs":{"total":64411926528,"available":64327249920,"internally_reserved":0,"allocated":1945600,"data_stored":1546158,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[1],"upper_bound":2},"perf_stat":{"commit_latency
_ms":15,"apply_latency_ms":15,"commit_latency_ns":15000000,"apply_latency_ns":15000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.000269"},"pg_stats":[{"pgid":"1.0","version":"21'32","reported_seq":57,"reported_epoch":22,"state":"active+clean","last_fresh":"2026-03-10T05:54:23.465205+0000","last_change":"2026-03-10T05:54:22.452877+0000","last_active":"2026-03-10T05:54:23.465205+0000","last_peered":"2026-03-10T05:54:23.465205+0000","last_clean":"2026-03-10T05:54:23.465205+0000","last_became_active":"2026-03-10T05:54:22.452628+0000","last_became_peered":"2026-03-10T05:54:22.452628+0000","last_unstale":"2026-03-10T05:54:23.465205+0000","last_undegraded":"2026-03-10T05:54:23.465205+0000","last_fullsized":"2026-03-10T05:54:23.465205+0000","mapping_epoch":20,"log_start":"0'0","ondisk_log_start
":"0'0","created":20,"last_epoch_clean":21,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T05:54:21.419383+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T05:54:21.419383+0000","last_clean_scrub_stamp":"2026-03-10T05:54:21.419383+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:07:00.794854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degrade
d":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411330,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27472,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939952,"statfs":{"total":21470642176,"available":21442510848,"internally_reserved":0,"allocated":643072,"data_stored":512868,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":8,"apply_latency_ms":8,"commit_latency_ns":8000000,"apply_latency_ns":8000000},"alerts":[]},{"osd":1,"up_from":14,"seq":60129542150,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"st
atfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":651264,"data_stored":516645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[1],"upper_bound":2},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705672,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27612,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939812,"statfs":{"total":21470642176,"available":21442367488,"internally_reserved":0,"allocated":651264,"data_stored":516645,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T05:54:27.222 
INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T05:54:27.222 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T05:54:27.222 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T05:54:27.222 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph health --format=json 2026-03-10T05:54:27.404 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:27.664 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:54:27.664 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T05:54:27.789 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:27 vm04 ceph-mon[50920]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:27.789 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:27 vm04 ceph-mon[50920]: from='client.24239 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:27.789 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:27 vm04 ceph-mon[50920]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:27.819 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T05:54:27.819 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T05:54:27.819 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T05:54:27.821 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm04.local 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- bash -c 'set -e 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> set -x 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch apply node-exporter 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch apply grafana 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch apply alertmanager 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch apply prometheus 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> sleep 240 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch ls 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch ps 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch host ls 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r '"'"'last | .daemon_name'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '"'"'.[] | select(.hostname==$GRAFANA_HOST) | 
.addr'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '"'"'.[] | select(.hostname==$PROM_HOST) | .addr'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '"'"'.[] | select(.hostname==$ALERTM_HOST) | .addr'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> # check each host node-exporter metrics endpoint is responsive 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '"'"'.[] | .addr'"'"') 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> for ip in $ALL_HOST_IPS; do 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${ip}:9100/metric 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> done 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> # check grafana endpoints are responsive and database health is okay 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> curl -k -s https://${GRAFANA_IP}:3000/api/health 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '"'"'.database == "ok"'"'"' 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> # stop mon daemon in order to trigger an alert 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> ceph orch daemon stop $MON_DAEMON 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> sleep 120 2026-03-10T05:54:27.821 DEBUG:teuthology.orchestra.run.vm04:> # check prometheus endpoints are responsive and mon down alert is firing 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${PROM_IP}:9095/api/v1/status/config 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '"'"'.status == "success"'"'"' 
2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${PROM_IP}:9095/api/v1/alerts 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '"'"'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'"'"' 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> # check alertmanager endpoints are responsive and mon down alert is active 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${ALERTM_IP}:9093/api/v2/status 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '"'"'.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'"'"' 2026-03-10T05:54:27.822 DEBUG:teuthology.orchestra.run.vm04:> ' 2026-03-10T05:54:27.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:27 vm06 ceph-mon[56706]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:27.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:27 vm06 ceph-mon[56706]: from='client.24239 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:27.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:27 vm06 ceph-mon[56706]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:27.996 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T05:54:28.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:27 vm08 ceph-mon[53504]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:28.054 
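The final checks in the script echoed above pipe the Prometheus `/api/v1/alerts` response through jq to confirm a firing `CephMonDown` alert: `.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"`. The same predicate can be sketched in Python; the sample response below is illustrative, not taken from this run:

```python
import json

def mon_down_firing(prom_alerts_json: str) -> bool:
    """Mirror the jq check: is any CephMonDown alert in state "firing"?"""
    data = json.loads(prom_alerts_json)
    return any(
        a.get("state") == "firing"
        for a in data.get("data", {}).get("alerts", [])
        if a.get("labels", {}).get("alertname") == "CephMonDown"
    )

# Illustrative shape of a Prometheus /api/v1/alerts response.
sample = json.dumps({
    "status": "success",
    "data": {"alerts": [
        {"labels": {"alertname": "CephMonDown", "severity": "critical"},
         "state": "firing"},
    ]},
})
print(mon_down_firing(sample))  # True
```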
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:27 vm08 ceph-mon[53504]: from='client.24239 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:54:28.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:27 vm08 ceph-mon[53504]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:28.076 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch apply node-exporter 2026-03-10T05:54:28.238 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled node-exporter update... 2026-03-10T05:54:28.248 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch apply grafana 2026-03-10T05:54:28.409 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled grafana update... 2026-03-10T05:54:28.419 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch apply alertmanager 2026-03-10T05:54:28.589 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled alertmanager update... 2026-03-10T05:54:28.601 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch apply prometheus 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='client.? 
192.168.123.104:0/3139003075' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='client.24262 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: Saving service node-exporter spec with placement * 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:28.720 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:28 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:28.797 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled prometheus update... 2026-03-10T05:54:28.808 INFO:teuthology.orchestra.run.vm04.stderr:+ sleep 240 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='client.? 
192.168.123.104:0/3139003075' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='client.24262 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: Saving service node-exporter spec with placement * 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:28 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='client.? 
192.168.123.104:0/3139003075' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='client.24262 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: Saving service node-exporter spec with placement * 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:29.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:28 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 
ceph-mon[56706]: Saving service grafana spec with placement count:1 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: Deploying daemon node-exporter.vm04 on vm04 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: Saving service alertmanager spec with placement count:1 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: Saving service prometheus spec with placement count:1 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:29.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:29 vm06 ceph-mon[56706]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: Saving service grafana spec with placement count:1 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 
ceph-mon[53504]: Deploying daemon node-exporter.vm04 on vm04 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: Saving service alertmanager spec with placement count:1 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: Saving service prometheus spec with placement count:1 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:30.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:29 vm08 ceph-mon[53504]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: Saving service grafana spec with placement count:1 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: Deploying daemon node-exporter.vm04 on vm04 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 
ceph-mon[50920]: from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: Saving service alertmanager spec with placement count:1 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: Saving service prometheus spec with placement count:1 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:30.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:29 vm04 ceph-mon[50920]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:32 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:32 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:32 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:32 vm06 ceph-mon[56706]: Deploying daemon node-exporter.vm06 on vm06 2026-03-10T05:54:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:32 vm06 ceph-mon[56706]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB 
data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:32.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:32 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:32 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:32 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:32 vm08 ceph-mon[53504]: Deploying daemon node-exporter.vm06 on vm06 2026-03-10T05:54:32.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:32 vm08 ceph-mon[53504]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:32.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:32 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:32 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:32 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:32.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:32 vm04 ceph-mon[50920]: Deploying daemon node-exporter.vm06 on vm06 2026-03-10T05:54:32.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:32 vm04 ceph-mon[50920]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:34.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:34 vm08 ceph-mon[53504]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:34.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:34 vm08 ceph-mon[53504]: from='mgr.14150 
192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:34 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:34 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:34 vm06 ceph-mon[56706]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:34.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:34 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:34 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:34 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:34 vm04 ceph-mon[50920]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:34.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:34 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:34 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:34.807 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:34 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:35.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:35 vm06 ceph-mon[56706]: Deploying daemon node-exporter.vm08 on vm08 2026-03-10T05:54:35.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:35 vm08 ceph-mon[53504]: Deploying daemon node-exporter.vm08 on vm08 
2026-03-10T05:54:35.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:35 vm04 ceph-mon[50920]: Deploying daemon node-exporter.vm08 on vm08 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:36 vm06 ceph-mon[56706]: Deploying daemon grafana.vm04 on vm04 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.804 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:36 vm08 ceph-mon[53504]: Deploying daemon grafana.vm04 on vm04 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "dashboard 
set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:36.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:36 vm04 ceph-mon[50920]: Deploying daemon grafana.vm04 on vm04 2026-03-10T05:54:37.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:37 vm06 ceph-mon[56706]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:37.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:37 vm08 ceph-mon[53504]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:37.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:37 vm04 ceph-mon[50920]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:40.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:40 vm08 ceph-mon[53504]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:40.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:40 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:40.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:40 vm04 ceph-mon[50920]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:40.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:40 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:40.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:40 vm06 ceph-mon[56706]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB 
avail 2026-03-10T05:54:40.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:40 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:41.674 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:41 vm04 ceph-mon[50920]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:41.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:41 vm08 ceph-mon[53504]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:41.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:41 vm06 ceph-mon[56706]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:43.600 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.600 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.600 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.600 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.600 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 
ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:43.793 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:43 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:43.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:43.889 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:43 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:44.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:43 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' 
entity='mgr.a' 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:44 vm08 ceph-mon[53504]: Deploying daemon alertmanager.vm08 on vm08 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:44 vm04 ceph-mon[50920]: Deploying daemon 
alertmanager.vm08 on vm08 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:45.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:44 vm06 ceph-mon[56706]: Deploying daemon alertmanager.vm08 on vm08 2026-03-10T05:54:46.440 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:46 vm08 ceph-mon[53504]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:46.440 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:46 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:46.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:46 vm04 ceph-mon[50920]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:46.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:46 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:46.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:46 vm06 ceph-mon[56706]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:54:46.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:46 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: Deploying daemon prometheus.vm06 on vm06 2026-03-10T05:54:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:48 vm06 ceph-mon[56706]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:48 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:48 vm08 ceph-mon[53504]: Deploying daemon prometheus.vm06 on vm06 2026-03-10T05:54:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
05:54:48 vm08 ceph-mon[53504]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: Deploying daemon prometheus.vm06 on vm06 2026-03-10T05:54:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:48 vm04 ceph-mon[50920]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:50.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:50 vm04 ceph-mon[50920]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:50.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:50 vm06 ceph-mon[56706]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:51.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:50 vm08 ceph-mon[53504]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:51.781 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:51 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:51.781 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:51 vm06 ceph-mon[56706]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:54:52.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:51 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:52.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:51 vm08 ceph-mon[53504]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:52.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:51 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:52.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:51 vm04 ceph-mon[50920]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:53.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:53 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:53 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:53 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:53 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:54:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:53 vm08 ceph-mon[53504]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:53 
vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-mon[50920]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setuser ceph since I am not root 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ignoring --setgroup ceph since I am not root 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:53.642+0000 7f140b12d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:54:53.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:53 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:53.685+0000 7f140b12d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: ignoring --setuser ceph since I am not root 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: ignoring --setgroup ceph since I am not root 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:53.645+0000 7fd83b5d8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:54:53.888 
INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:53.687+0000 7fd83b5d8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:54:53.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:53 vm06 ceph-mon[56706]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:54.388 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:54.109+0000 7fd83b5d8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:54:54.450 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:54.103+0000 7f140b12d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-mon[50920]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-mon[50920]: mgrmap e16: a(active, since 98s), standbys: b 
2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:54.448+0000 7f140b12d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: from numpy import show_config as show_numpy_config 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:54.538+0000 7f140b12d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:54.582+0000 7f140b12d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:54:54.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:54 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:54.654+0000 7f140b12d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:54.436+0000 7fd83b5d8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: from numpy import show_config as show_numpy_config 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:54.525+0000 7fd83b5d8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:54.562+0000 7fd83b5d8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:54.632+0000 7fd83b5d8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-mon[56706]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:54:54.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:54 vm06 ceph-mon[56706]: mgrmap e16: a(active, since 98s), standbys: b 2026-03-10T05:54:55.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:54 vm08 ceph-mon[53504]: from='mgr.14150 192.168.123.104:0/3082773452' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:54:55.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:54 vm08 ceph-mon[53504]: mgrmap e16: a(active, since 98s), standbys: b 2026-03-10T05:54:55.386 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.123+0000 7fd83b5d8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:54:55.386 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 
05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.234+0000 7fd83b5d8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:54:55.386 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.273+0000 7fd83b5d8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:54:55.386 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.308+0000 7fd83b5d8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:54:55.386 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.347+0000 7fd83b5d8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:54:55.436 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.161+0000 7f140b12d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:54:55.437 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.273+0000 7f140b12d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:54:55.437 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.316+0000 7f140b12d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:54:55.437 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.351+0000 7f140b12d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:54:55.437 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.396+0000 7f140b12d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:54:55.638 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.384+0000 7fd83b5d8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:54:55.638 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.559+0000 7fd83b5d8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:54:55.638 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.611+0000 7fd83b5d8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:54:55.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.434+0000 7f140b12d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:54:55.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.610+0000 7f140b12d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:54:55.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:55.662+0000 7f140b12d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:54:56.116 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:55 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:55.834+0000 7fd83b5d8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:54:56.170 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:55 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 
2026-03-10T05:54:55.885+0000 7f140b12d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:54:56.381 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.114+0000 7fd83b5d8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:54:56.381 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.152+0000 7fd83b5d8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:54:56.381 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.192+0000 7fd83b5d8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:54:56.382 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.267+0000 7fd83b5d8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:54:56.382 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.303+0000 7fd83b5d8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:54:56.440 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.168+0000 7f140b12d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:54:56.440 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.205+0000 7f140b12d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:54:56.440 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.246+0000 7f140b12d140 -1 mgr[py] Module snap_schedule has 
missing NOTIFY_TYPES member 2026-03-10T05:54:56.441 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.323+0000 7f140b12d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:54:56.441 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.359+0000 7f140b12d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:54:56.632 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.379+0000 7fd83b5d8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:54:56.632 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.493+0000 7fd83b5d8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:54:56.696 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.438+0000 7f140b12d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:54:56.696 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.556+0000 7f140b12d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: Standby manager daemon b restarted 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: Standby manager daemon b started 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: from='mgr.? 
192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-mon[56706]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.630+0000 7fd83b5d8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.666+0000 7fd83b5d8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: [10/Mar/2026:05:54:56] ENGINE Bus STARTING 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: CherryPy Checker: 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: The Application mounted at '' has an empty config. 
2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: [10/Mar/2026:05:54:56] ENGINE Serving on http://:::9283 2026-03-10T05:54:56.888 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 05:54:56 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b[57700]: [10/Mar/2026:05:54:56] ENGINE Bus STARTED 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: Standby manager daemon b restarted 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: Standby manager daemon b started 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T05:54:57.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:56 vm08 ceph-mon[53504]: from='mgr.? 
192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.693+0000 7f140b12d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:56.733+0000 7f140b12d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:54:56] ENGINE Bus STARTING 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: CherryPy Checker: 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: The Application mounted at '' has an empty config. 
2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:54:56] ENGINE Serving on http://:::9283 2026-03-10T05:54:57.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:54:56] ENGINE Bus STARTED 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: Standby manager daemon b restarted 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: Standby manager daemon b started 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: from='mgr.? 192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T05:54:57.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:56 vm04 ceph-mon[50920]: from='mgr.? 
192.168.123.106:0/2776302584' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: mgrmap e17: a(active, since 101s), standbys: b 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: Active manager daemon a restarted 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: Activating manager daemon a 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: mgrmap e18: a(active, starting, since 0.00894378s), standbys: b 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 
10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: Manager daemon a is now available 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 
ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:57.875 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:57 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: mgrmap e17: a(active, since 101s), standbys: b 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: Active manager daemon a restarted 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: Activating manager daemon a 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: mgrmap e18: a(active, starting, since 0.00894378s), standbys: b 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:57.889 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 
ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: Manager daemon a is now available 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:57.889 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:57.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:57.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:57.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:57.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:57.890 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:57 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 
2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: mgrmap e17: a(active, since 101s), standbys: b 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: Active manager daemon a restarted 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: Activating manager daemon a 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: osdmap e23: 3 total, 3 up, 3 in 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: mgrmap e18: a(active, starting, since 0.00894378s), standbys: b 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mds 
metadata"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: Manager daemon a is now available 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T05:54:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:57 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: mgrmap e19: a(active, since 1.01316s), standbys: b 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: [10/Mar/2026:05:54:57] ENGINE Bus STARTING 2026-03-10T05:54:59.139 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: [10/Mar/2026:05:54:58] ENGINE Serving on http://192.168.123.104:8765 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: [10/Mar/2026:05:54:58] ENGINE Serving on https://192.168.123.104:7150 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: [10/Mar/2026:05:54:58] ENGINE Bus STARTED 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: [10/Mar/2026:05:54:58] ENGINE Client ('192.168.123.104', 51038) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": 
"osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Adjusting osd_memory_target on vm08 to 2305M 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm06:/etc/ceph/ceph.conf 
2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:58 vm06 ceph-mon[56706]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: mgrmap e19: a(active, since 1.01316s), standbys: b 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: [10/Mar/2026:05:54:57] ENGINE Bus STARTING 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: [10/Mar/2026:05:54:58] ENGINE Serving on http://192.168.123.104:8765 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: [10/Mar/2026:05:54:58] ENGINE Serving on https://192.168.123.104:7150 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: [10/Mar/2026:05:54:58] ENGINE Bus STARTED 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: [10/Mar/2026:05:54:58] ENGINE Client ('192.168.123.104', 51038) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Adjusting osd_memory_target on vm08 to 2305M 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating 
vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:58 vm08 ceph-mon[53504]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: mgrmap e19: a(active, since 1.01316s), standbys: b 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: [10/Mar/2026:05:54:57] ENGINE Bus STARTING 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: [10/Mar/2026:05:54:58] ENGINE Serving on http://192.168.123.104:8765 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": 
"config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: [10/Mar/2026:05:54:58] ENGINE Serving on https://192.168.123.104:7150 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: [10/Mar/2026:05:54:58] ENGINE Bus STARTED 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: [10/Mar/2026:05:54:58] ENGINE Client ('192.168.123.104', 51038) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Adjusting osd_memory_target on vm08 to 2305M 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm04:/etc/ceph/ceph.conf 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.307 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:58 vm04 ceph-mon[50920]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.conf 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:54:59.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: mgrmap e20: a(active, since 2s), standbys: b 2026-03-10T05:54:59.974 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:54:59.974 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Reconfiguring grafana.vm04 (dependencies changed)... 
2026-03-10T05:54:59.974 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:54:59 vm04 ceph-mon[50920]: Reconfiguring daemon grafana.vm04 on vm04 2026-03-10T05:55:00.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 
ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: mgrmap e20: a(active, since 2s), standbys: b 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Reconfiguring grafana.vm04 (dependencies changed)... 
2026-03-10T05:55:00.305 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:54:59 vm08 ceph-mon[53504]: Reconfiguring daemon grafana.vm04 on vm04 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Updating vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/config/ceph.client.admin.keyring 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 
ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: mgrmap e20: a(active, since 2s), standbys: b 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Reconfiguring grafana.vm04 (dependencies changed)... 
2026-03-10T05:55:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:54:59 vm06 ceph-mon[56706]: Reconfiguring daemon grafana.vm04 on vm04 2026-03-10T05:55:00.799 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:00 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:00] "GET /metrics HTTP/1.1" 200 20064 "" "Prometheus/2.51.0" 2026-03-10T05:55:01.691 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:01 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.691 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:01 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.691 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:01 vm08 ceph-mon[53504]: Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-10T05:55:01.691 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:01 vm08 ceph-mon[53504]: Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-10T05:55:01.691 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:01 vm08 ceph-mon[53504]: pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:01.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:01 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:01 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:01 vm06 ceph-mon[56706]: Reconfiguring alertmanager.vm08 (dependencies changed)... 
2026-03-10T05:55:01.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:01 vm06 ceph-mon[56706]: Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-10T05:55:01.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:01 vm06 ceph-mon[56706]: pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:01.972 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.972 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:01.972 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-mon[50920]: Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-10T05:55:01.972 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-mon[50920]: Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-10T05:55:01.972 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-mon[50920]: pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Bus STOPPING 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Bus STOPPED 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Bus STARTING 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Serving on http://:::9283 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Bus STARTED 2026-03-10T05:55:01.973 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:01 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:01] ENGINE Bus STOPPING 2026-03-10T05:55:02.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:02] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:55:02.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:02] ENGINE Bus STOPPED 2026-03-10T05:55:02.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:02] ENGINE Bus STARTING 2026-03-10T05:55:02.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:02] ENGINE Serving on http://:::9283 2026-03-10T05:55:02.556 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: [10/Mar/2026:05:55:02] ENGINE Bus STARTED 2026-03-10T05:55:02.652 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: mgrmap e21: a(active, since 4s), standbys: b 2026-03-10T05:55:02.652 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:02.652 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mon.? 
-' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:03.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:02 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: mgrmap e21: a(active, since 4s), standbys: b 
2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:03.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: mgrmap e21: a(active, since 4s), standbys: b 
2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:03.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:55:04.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:03 vm08 ceph-mon[53504]: pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 
60 GiB / 60 GiB avail 2026-03-10T05:55:04.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:03 vm04 ceph-mon[50920]: pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:04.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:03 vm06 ceph-mon[56706]: pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:06.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:05 vm06 ceph-mon[56706]: pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T05:55:06.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:05 vm08 ceph-mon[53504]: pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T05:55:06.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:05 vm04 ceph-mon[50920]: pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T05:55:08.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:07 vm06 ceph-mon[56706]: pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:55:08.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:07 vm08 ceph-mon[53504]: pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:55:08.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:07 vm04 ceph-mon[50920]: pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:55:10.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:09 vm06 ceph-mon[56706]: pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:55:10.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:09 vm08 ceph-mon[53504]: pgmap v9: 
1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:55:10.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:09 vm04 ceph-mon[50920]: pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:55:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:10 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:10] "GET /metrics HTTP/1.1" 200 20064 "" "Prometheus/2.51.0" 2026-03-10T05:55:12.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:11 vm08 ceph-mon[53504]: pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:12.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:11 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:12.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:11 vm06 ceph-mon[56706]: pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:12.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:11 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:11 vm04 ceph-mon[50920]: pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:11 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:14.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:13 vm06 ceph-mon[56706]: pgmap 
v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:14.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:13 vm08 ceph-mon[53504]: pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:14.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:13 vm04 ceph-mon[50920]: pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:16.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:15 vm06 ceph-mon[56706]: pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:16.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:15 vm08 ceph-mon[53504]: pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:16.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:15 vm04 ceph-mon[50920]: pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T05:55:18.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:17 vm06 ceph-mon[56706]: pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:18.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:17 vm08 ceph-mon[53504]: pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:18.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:17 vm04 ceph-mon[50920]: pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:20.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:19 vm06 ceph-mon[56706]: pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:20.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:19 
vm08 ceph-mon[53504]: pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:19 vm04 ceph-mon[50920]: pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:21.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:20] "GET /metrics HTTP/1.1" 200 21325 "" "Prometheus/2.51.0" 2026-03-10T05:55:22.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:21 vm06 ceph-mon[56706]: pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:22.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:21 vm08 ceph-mon[53504]: pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:22.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:21 vm04 ceph-mon[50920]: pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:24.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:23 vm06 ceph-mon[56706]: pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:24.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:23 vm08 ceph-mon[53504]: pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:24.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:23 vm04 ceph-mon[50920]: pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:26.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:25 vm06 ceph-mon[56706]: pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:26.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:25 vm08 ceph-mon[53504]: pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 
GiB avail 2026-03-10T05:55:26.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:25 vm04 ceph-mon[50920]: pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:27.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:26 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:27.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:26 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:26 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:28.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:27 vm06 ceph-mon[56706]: pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:28.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:27 vm08 ceph-mon[53504]: pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:27 vm04 ceph-mon[50920]: pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:29 vm08 ceph-mon[53504]: pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:30.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:29 vm04 ceph-mon[50920]: pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:29 vm06 ceph-mon[56706]: pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:55:31.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:30] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-10T05:55:32.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:31 vm08 ceph-mon[53504]: pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:32.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:31 vm04 ceph-mon[50920]: pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:31 vm06 ceph-mon[56706]: pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:34.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:33 vm08 ceph-mon[53504]: pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:34.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:33 vm04 ceph-mon[50920]: pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:34.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:33 vm06 ceph-mon[56706]: pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:36.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:35 vm08 ceph-mon[53504]: pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:36.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:35 vm04 ceph-mon[50920]: pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:36.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:35 vm06 ceph-mon[56706]: pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:38.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:37 vm08 
ceph-mon[53504]: pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:38.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:37 vm04 ceph-mon[50920]: pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:38.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:37 vm06 ceph-mon[56706]: pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:40.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:39 vm08 ceph-mon[53504]: pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:40.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:39 vm04 ceph-mon[50920]: pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:40.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:39 vm06 ceph-mon[56706]: pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:41.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:40 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:40] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-10T05:55:42.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:41 vm08 ceph-mon[53504]: pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:42.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:41 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:41 vm04 ceph-mon[50920]: pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:41 vm04 ceph-mon[50920]: from='mgr.14424 
192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:42.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:41 vm06 ceph-mon[56706]: pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:42.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:41 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:44.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:43 vm08 ceph-mon[53504]: pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:44.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:43 vm04 ceph-mon[50920]: pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:44.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:43 vm06 ceph-mon[56706]: pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:46.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:45 vm08 ceph-mon[53504]: pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:46.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:45 vm04 ceph-mon[50920]: pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:46.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:45 vm06 ceph-mon[56706]: pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:48.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:47 vm08 ceph-mon[53504]: pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:47 vm04 ceph-mon[50920]: pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:55:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:47 vm06 ceph-mon[56706]: pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:50.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:49 vm08 ceph-mon[53504]: pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:50.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:49 vm04 ceph-mon[50920]: pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:50.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:49 vm06 ceph-mon[56706]: pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:51.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:55:50 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:55:50] "GET /metrics HTTP/1.1" 200 21327 "" "Prometheus/2.51.0" 2026-03-10T05:55:52.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:51 vm08 ceph-mon[53504]: pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:52.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:51 vm04 ceph-mon[50920]: pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:52.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:51 vm06 ceph-mon[56706]: pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:54.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:53 vm08 ceph-mon[53504]: pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:54.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:53 vm04 ceph-mon[50920]: pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:54.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:53 vm06 
ceph-mon[56706]: pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:56.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:55 vm08 ceph-mon[53504]: pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:56.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:55 vm04 ceph-mon[50920]: pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:56.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:55 vm06 ceph-mon[56706]: pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:57.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:56 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:57.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:56 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:57.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:56 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:58.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:57 vm08 ceph-mon[53504]: pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:58.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:57 vm04 ceph-mon[50920]: pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:55:58.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:57 vm06 ceph-mon[56706]: pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:00.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:55:59 vm08 ceph-mon[53504]: pgmap v34: 1 
pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:55:59 vm04 ceph-mon[50920]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:55:59 vm06 ceph-mon[56706]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:01.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:00 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:00] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0" 2026-03-10T05:56:02.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:01 vm08 ceph-mon[53504]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:01 vm04 ceph-mon[50920]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:02.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:01 vm06 ceph-mon[56706]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:02 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:03.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:02 vm08 
ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:56:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:02 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:03.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:02 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:56:03.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:03.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:03.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:02 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:03.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:02 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:56:04.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:03 vm08 ceph-mon[53504]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:04.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:03 vm04 ceph-mon[50920]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T05:56:04.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:03 vm06 ceph-mon[56706]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:06.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:06 vm08 ceph-mon[53504]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:06.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:06 vm04 ceph-mon[50920]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:06.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:06 vm06 ceph-mon[56706]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:08.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:08 vm08 ceph-mon[53504]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:08.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:08 vm04 ceph-mon[50920]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:08.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:08 vm06 ceph-mon[56706]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:10.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:10 vm08 ceph-mon[53504]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:10.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:10 vm04 ceph-mon[50920]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:10 vm06 ceph-mon[56706]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:10 vm04 
ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:10] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0" 2026-03-10T05:56:12.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:12 vm08 ceph-mon[53504]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:12.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:12 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:12 vm04 ceph-mon[50920]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:12 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:12 vm06 ceph-mon[56706]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:12 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:14.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:14 vm08 ceph-mon[53504]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:14.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:14 vm04 ceph-mon[50920]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:14.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:14 vm06 ceph-mon[56706]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:16.304 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:16 vm08 ceph-mon[53504]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:16.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:16 vm04 ceph-mon[50920]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:16.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:16 vm06 ceph-mon[56706]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:18.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:18 vm08 ceph-mon[53504]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:18.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:18 vm04 ceph-mon[50920]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:18.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:18 vm06 ceph-mon[56706]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:20.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:20 vm08 ceph-mon[53504]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:20 vm04 ceph-mon[50920]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:20.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:20 vm06 ceph-mon[56706]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:21.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:20] "GET /metrics HTTP/1.1" 200 21327 "" "Prometheus/2.51.0" 2026-03-10T05:56:22.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:22 vm08 ceph-mon[53504]: pgmap v45: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:22.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:22 vm04 ceph-mon[50920]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:22.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:22 vm06 ceph-mon[56706]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:24.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:24 vm08 ceph-mon[53504]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:24.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:24 vm04 ceph-mon[50920]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:24.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:24 vm06 ceph-mon[56706]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:26.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:26 vm08 ceph-mon[53504]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:26.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:26 vm04 ceph-mon[50920]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:26.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:26 vm06 ceph-mon[56706]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:27.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:27 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:27 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T05:56:27.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:27 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:28.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:28 vm08 ceph-mon[53504]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:28 vm04 ceph-mon[50920]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:28.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:28 vm06 ceph-mon[56706]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:30 vm08 ceph-mon[53504]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:30.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:30 vm04 ceph-mon[50920]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:30 vm06 ceph-mon[56706]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:31.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:30] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-10T05:56:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:32 vm06 ceph-mon[56706]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:32.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:32 vm08 ceph-mon[53504]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:32.556 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:32 vm04 ceph-mon[50920]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:34.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:34 vm06 ceph-mon[56706]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:34.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:34 vm08 ceph-mon[53504]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:34.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:34 vm04 ceph-mon[50920]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:36.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:36 vm06 ceph-mon[56706]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:36.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:36 vm08 ceph-mon[53504]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:36.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:36 vm04 ceph-mon[50920]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:38.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:38 vm06 ceph-mon[56706]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:38.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:38 vm08 ceph-mon[53504]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:38.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:38 vm04 ceph-mon[50920]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:40.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:40 vm06 ceph-mon[56706]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:56:40.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:40 vm08 ceph-mon[53504]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:40.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:40 vm04 ceph-mon[50920]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:41.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:40 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:40] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-10T05:56:42.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:42 vm06 ceph-mon[56706]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:42.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:42 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:42.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:42 vm08 ceph-mon[53504]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:42.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:42 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:42.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:42 vm04 ceph-mon[50920]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:42.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:42 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:44.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:44 vm06 ceph-mon[56706]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:44.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:44 vm08 ceph-mon[53504]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:44.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:44 vm04 ceph-mon[50920]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:46.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:46 vm06 ceph-mon[56706]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:46.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:46 vm08 ceph-mon[53504]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:46.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:46 vm04 ceph-mon[50920]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:48 vm06 ceph-mon[56706]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:48.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:48 vm08 ceph-mon[53504]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:48.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:48 vm04 ceph-mon[50920]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:50.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:50 vm06 ceph-mon[56706]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:50.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:50 vm08 ceph-mon[53504]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:50.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:50 vm04 ceph-mon[50920]: pgmap v59: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:51.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:56:50 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:56:50] "GET /metrics HTTP/1.1" 200 21324 "" "Prometheus/2.51.0" 2026-03-10T05:56:52.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:52 vm06 ceph-mon[56706]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:52.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:52 vm08 ceph-mon[53504]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:52.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:52 vm04 ceph-mon[50920]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:54.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:54 vm06 ceph-mon[56706]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:54.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:54 vm08 ceph-mon[53504]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:54.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:54 vm04 ceph-mon[50920]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:56.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:56 vm06 ceph-mon[56706]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:56.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:56 vm08 ceph-mon[53504]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:56.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:56 vm04 ceph-mon[50920]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:57.388 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:57.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:57.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:58.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:56:58 vm06 ceph-mon[56706]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:58.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:56:58 vm08 ceph-mon[53504]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:56:58.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:56:58 vm04 ceph-mon[50920]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:00 vm06 ceph-mon[56706]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:00.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:00 vm08 ceph-mon[53504]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:00.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:00 vm04 ceph-mon[50920]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:01.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:00 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:00] "GET /metrics HTTP/1.1" 200 21325 "" 
"Prometheus/2.51.0" 2026-03-10T05:57:02.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:02 vm08 ceph-mon[53504]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:02.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:02 vm04 ceph-mon[50920]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:02.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:02 vm06 ceph-mon[56706]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:03.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:57:03.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:03.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:03.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:03 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:57:03.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:57:03.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:03.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:03.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:03 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:57:03.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:57:03.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:03.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:03.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:03 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:57:04.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:04 vm08 ceph-mon[53504]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:04.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:04 vm04 ceph-mon[50920]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:04.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:04 vm06 ceph-mon[56706]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:06.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:06 vm08 ceph-mon[53504]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:06.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:06 vm04 ceph-mon[50920]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:06.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:06 vm06 
ceph-mon[56706]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:08.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:08 vm08 ceph-mon[53504]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:08.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:08 vm04 ceph-mon[50920]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:08.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:08 vm06 ceph-mon[56706]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:10.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:10 vm08 ceph-mon[53504]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:10.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:10 vm04 ceph-mon[50920]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:10.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:10 vm06 ceph-mon[56706]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:10 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:10] "GET /metrics HTTP/1.1" 200 21325 "" "Prometheus/2.51.0" 2026-03-10T05:57:12.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:12 vm08 ceph-mon[53504]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:12.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:12 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:12 vm04 ceph-mon[50920]: pgmap v70: 1 pgs: 1 active+clean; 
449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:12 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:12.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:12 vm06 ceph-mon[56706]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:12.638 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:12 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:14.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:14 vm08 ceph-mon[53504]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:14.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:14 vm04 ceph-mon[50920]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:14.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:14 vm06 ceph-mon[56706]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:16.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:16 vm08 ceph-mon[53504]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:16.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:16 vm04 ceph-mon[50920]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:16.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:16 vm06 ceph-mon[56706]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:18.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:18 vm08 ceph-mon[53504]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:18.806 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:18 vm04 ceph-mon[50920]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:18.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:18 vm06 ceph-mon[56706]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:20.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:20 vm08 ceph-mon[53504]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:20.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:20] "GET /metrics HTTP/1.1" 200 21338 "" "Prometheus/2.51.0" 2026-03-10T05:57:20.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:20 vm04 ceph-mon[50920]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:20.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:20 vm06 ceph-mon[56706]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:22.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:22 vm08 ceph-mon[53504]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:22.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:22 vm04 ceph-mon[50920]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:22.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:22 vm06 ceph-mon[56706]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:24.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:24 vm08 ceph-mon[53504]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:24.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:24 vm04 ceph-mon[50920]: pgmap v76: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:24.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:24 vm06 ceph-mon[56706]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:26.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:26 vm08 ceph-mon[53504]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:26.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:26 vm04 ceph-mon[50920]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:26.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:26 vm06 ceph-mon[56706]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:27.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:27 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:27.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:27 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:27.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:27 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:28.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:28 vm08 ceph-mon[53504]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:28.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:28 vm04 ceph-mon[50920]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:28.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:28 vm06 ceph-mon[56706]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:30.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:30 vm08 ceph-mon[53504]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:30.806 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:30] "GET /metrics HTTP/1.1" 200 21338 "" "Prometheus/2.51.0" 2026-03-10T05:57:30.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:30 vm04 ceph-mon[50920]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:30.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:30 vm06 ceph-mon[56706]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:32.804 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:32 vm08 ceph-mon[53504]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:32.806 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:32 vm04 ceph-mon[50920]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:32.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:32 vm06 ceph-mon[56706]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:34.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:34 vm06 ceph-mon[56706]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:35.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:34 vm08 ceph-mon[53504]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:35.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:34 vm04 ceph-mon[50920]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:36.888 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:36 vm06 ceph-mon[56706]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:37.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:36 vm08 ceph-mon[53504]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:37.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:36 vm04 ceph-mon[50920]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:38.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:38 vm06 ceph-mon[56706]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:39.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:38 vm08 ceph-mon[53504]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:39.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:38 vm04 ceph-mon[50920]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:40.834 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:40 vm06 ceph-mon[56706]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:40.856 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:40 vm08 ceph-mon[53504]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:40.858 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:40 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:40] "GET /metrics HTTP/1.1" 200 21338 "" "Prometheus/2.51.0" 2026-03-10T05:57:40.858 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:40 vm04 ceph-mon[50920]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:42.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:42 vm06 ceph-mon[56706]: pgmap v85: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:42.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:42 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:43.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:42 vm08 ceph-mon[53504]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:43.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:42 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:43.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:42 vm04 ceph-mon[50920]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:43.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:42 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:44.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:44 vm06 ceph-mon[56706]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:45.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:44 vm08 ceph-mon[53504]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:45.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:44 vm04 ceph-mon[50920]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:46.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:46 vm06 ceph-mon[56706]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:47.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:46 vm08 ceph-mon[53504]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:47.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:46 vm04 ceph-mon[50920]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:48.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:48 vm06 ceph-mon[56706]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:49.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:48 vm08 ceph-mon[53504]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:49.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:48 vm04 ceph-mon[50920]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:50.888 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:50 vm06 ceph-mon[56706]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:51.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:50 vm08 ceph-mon[53504]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:51.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:57:50 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:57:50] "GET /metrics HTTP/1.1" 200 21335 "" "Prometheus/2.51.0" 2026-03-10T05:57:51.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:50 vm04 ceph-mon[50920]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:52.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:51 vm08 ceph-mon[53504]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:52.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:51 vm04 ceph-mon[50920]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:52.138 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:51 vm06 ceph-mon[56706]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:54.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:53 vm08 ceph-mon[53504]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:54.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:53 vm04 ceph-mon[50920]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:54.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:53 vm06 ceph-mon[56706]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:56.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:55 vm08 ceph-mon[53504]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:56.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:55 vm04 ceph-mon[50920]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:56.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:55 vm06 ceph-mon[56706]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:58.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:57 vm08 ceph-mon[53504]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:58.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:57 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:58.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:57 vm04 ceph-mon[50920]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:58.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:57 vm04 ceph-mon[50920]: from='mgr.14424 
192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:58.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:57 vm06 ceph-mon[56706]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:57:58.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:58:00.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:57:59 vm08 ceph-mon[53504]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:00.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:57:59 vm04 ceph-mon[50920]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:00.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:57:59 vm06 ceph-mon[56706]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:01.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:00 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:00] "GET /metrics HTTP/1.1" 200 21334 "" "Prometheus/2.51.0" 2026-03-10T05:58:02.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:01 vm08 ceph-mon[53504]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:02.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:01 vm04 ceph-mon[50920]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:02.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:01 vm06 ceph-mon[56706]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:04.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:03 vm08 ceph-mon[53504]: pgmap v96: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:04.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:58:04.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:58:04.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:03 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:58:04.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:03 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:04.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:03 vm04 ceph-mon[50920]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:04.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:58:04.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:58:04.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:03 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:58:04.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:03 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:04.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:03 vm06 ceph-mon[56706]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB 
data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:04.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:58:04.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:58:04.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:03 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:58:04.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:03 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:06.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:05 vm04 ceph-mon[50920]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:06.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:05 vm06 ceph-mon[56706]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:06.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:05 vm08 ceph-mon[53504]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:08.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:07 vm06 ceph-mon[56706]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:08.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:07 vm08 ceph-mon[53504]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:08.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:07 vm04 ceph-mon[50920]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:10.106 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:09 vm08 ceph-mon[53504]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:10.109 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:09 vm04 ceph-mon[50920]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:10.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:09 vm06 ceph-mon[56706]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:10 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:10] "GET /metrics HTTP/1.1" 200 21334 "" "Prometheus/2.51.0" 2026-03-10T05:58:12.091 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:11 vm06 ceph-mon[56706]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:12.091 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:11 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:58:12.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:11 vm08 ceph-mon[53504]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:12.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:11 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:58:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:11 vm04 ceph-mon[50920]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:11 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T05:58:14.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:13 vm06 ceph-mon[56706]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:14.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:13 vm08 ceph-mon[53504]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:14.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:13 vm04 ceph-mon[50920]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:16.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:15 vm06 ceph-mon[56706]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:16.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:15 vm08 ceph-mon[53504]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:16.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:15 vm04 ceph-mon[50920]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:18.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:17 vm06 ceph-mon[56706]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:18.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:17 vm08 ceph-mon[53504]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:18.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:17 vm04 ceph-mon[50920]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:20.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:19 vm06 ceph-mon[56706]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:20.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:19 vm08 ceph-mon[53504]: pgmap v104: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:19 vm04 ceph-mon[50920]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:20.807 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:20] "GET /metrics HTTP/1.1" 200 21336 "" "Prometheus/2.51.0" 2026-03-10T05:58:22.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:21 vm06 ceph-mon[56706]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:22.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:21 vm08 ceph-mon[53504]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:22.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:21 vm04 ceph-mon[50920]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:24.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:23 vm06 ceph-mon[56706]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:24.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:23 vm08 ceph-mon[53504]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:24.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:23 vm04 ceph-mon[50920]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:26.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:25 vm06 ceph-mon[56706]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:26.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:25 vm08 ceph-mon[53504]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:26.306 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:25 vm04 ceph-mon[50920]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:27.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:26 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:26 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:27.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:26 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:28.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:27 vm08 ceph-mon[53504]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:27 vm04 ceph-mon[50920]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:28.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:27 vm06 ceph-mon[56706]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:28.811 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch ls
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:alertmanager ?:9093,9094 1/1 3m ago 4m count:1
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:grafana ?:3000 1/1 3m ago 4m count:1
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:mgr 2/2 3m ago 4m vm04=a;vm06=b;count:2
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:mon 3/3 3m ago 5m vm04:192.168.123.104=a;vm06:192.168.123.106=b;vm08:192.168.123.108=c;count:3
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:node-exporter ?:9100 3/3 3m ago 4m *
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:osd 3 3m ago -
2026-03-10T05:58:28.966 INFO:teuthology.orchestra.run.vm04.stdout:prometheus ?:9095 1/1 3m ago 4m count:1
2026-03-10T05:58:28.973 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch ps
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:alertmanager.vm08 vm08 *:9093,9094 running (3m) 3m ago 3m 15.5M - 0.25.0 c8568f914cd2 a849d480ce1e
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:grafana.vm04 vm04 *:3000 running (3m) 3m ago 3m 60.7M - 10.4.0 c8b91775d855 e971e866cd6a
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:mgr.a vm04 *:9283,8765 running (5m) 3m ago 5m 542M - 19.2.3-678-ge911bdeb 654f31e6858e 6600e7271873
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:mgr.b vm06 *:8443,8765 running (4m) 3m ago 4m 485M - 19.2.3-678-ge911bdeb 654f31e6858e c2d1e5eb8ed7
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:mon.a vm04 running (5m) 3m ago 5m 48.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e f5dff0ec46a7
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:mon.b vm06 running (4m) 3m ago 4m 45.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e fa39630c74b7
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:mon.c vm08 running (4m) 3m ago 4m 43.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 013927a7d5a3
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:node-exporter.vm04 vm04 *:9100 running (3m) 3m ago 3m 8804k - 1.7.0 72c9c2088986 592046c4eb8d
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:node-exporter.vm06 vm06 *:9100 running (3m) 3m ago 3m 5410k - 1.7.0 72c9c2088986 b7345b0163a7
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:node-exporter.vm08 vm08 *:9100 running (3m) 3m ago 3m 8174k - 1.7.0 72c9c2088986 ee2cdaf0ab1d
2026-03-10T05:58:29.123 INFO:teuthology.orchestra.run.vm04.stdout:osd.0 vm04 running (4m) 3m ago 4m 59.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cea0e1ddd5e1
2026-03-10T05:58:29.124 INFO:teuthology.orchestra.run.vm04.stdout:osd.1 vm06 running (4m) 3m ago 4m 61.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 37f6423bc9ff
2026-03-10T05:58:29.124 INFO:teuthology.orchestra.run.vm04.stdout:osd.2 vm08 running (4m) 3m ago 4m 35.7M 2305M 19.2.3-678-ge911bdeb 654f31e6858e 12584453ec00
2026-03-10T05:58:29.124 INFO:teuthology.orchestra.run.vm04.stdout:prometheus.vm06 vm06 *:9095 running (3m) 3m ago 3m 25.0M - 2.51.0 1d3b7f56885b 87eff8faac88
2026-03-10T05:58:29.133 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch host ls
2026-03-10T05:58:29.290 INFO:teuthology.orchestra.run.vm04.stdout:HOST ADDR LABELS STATUS
2026-03-10T05:58:29.290 INFO:teuthology.orchestra.run.vm04.stdout:vm04 192.168.123.104
2026-03-10T05:58:29.290 INFO:teuthology.orchestra.run.vm04.stdout:vm06 192.168.123.106
2026-03-10T05:58:29.290 INFO:teuthology.orchestra.run.vm04.stdout:vm08 192.168.123.108
2026-03-10T05:58:29.290 INFO:teuthology.orchestra.run.vm04.stdout:3 hosts in cluster
2026-03-10T05:58:29.299 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch ps --daemon-type mon -f json
2026-03-10T05:58:29.300 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r 'last | .daemon_name'
2026-03-10T05:58:29.459 INFO:teuthology.orchestra.run.vm04.stderr:+ MON_DAEMON=mon.c
2026-03-10T05:58:29.459 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch ps --daemon-type grafana -f json
2026-03-10T05:58:29.460 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r .hostname
2026-03-10T05:58:29.461 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -e '.[]'
2026-03-10T05:58:29.625 INFO:teuthology.orchestra.run.vm04.stderr:+ 
GRAFANA_HOST=vm04 2026-03-10T05:58:29.625 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch ps --daemon-type prometheus -f json 2026-03-10T05:58:29.625 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r .hostname 2026-03-10T05:58:29.627 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -e '.[]' 2026-03-10T05:58:29.790 INFO:teuthology.orchestra.run.vm04.stderr:+ PROM_HOST=vm06 2026-03-10T05:58:29.790 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch ps --daemon-type alertmanager -f json 2026-03-10T05:58:29.791 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r .hostname 2026-03-10T05:58:29.792 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -e '.[]' 2026-03-10T05:58:29.963 INFO:teuthology.orchestra.run.vm04.stderr:+ ALERTM_HOST=vm08 2026-03-10T05:58:29.964 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch host ls -f json 2026-03-10T05:58:29.964 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r --arg GRAFANA_HOST vm04 '.[] | select(.hostname==$GRAFANA_HOST) | .addr' 2026-03-10T05:58:30.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:29 vm04 ceph-mon[50920]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:30.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:29 vm04 ceph-mon[50920]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:29 vm04 ceph-mon[50920]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:29 vm04 ceph-mon[50920]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:29 vm04 ceph-mon[50920]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": 
"mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:30.134 INFO:teuthology.orchestra.run.vm04.stderr:+ GRAFANA_IP=192.168.123.104 2026-03-10T05:58:30.134 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch host ls -f json 2026-03-10T05:58:30.134 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r --arg PROM_HOST vm06 '.[] | select(.hostname==$PROM_HOST) | .addr' 2026-03-10T05:58:30.298 INFO:teuthology.orchestra.run.vm04.stderr:+ PROM_IP=192.168.123.106 2026-03-10T05:58:30.299 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch host ls -f json 2026-03-10T05:58:30.299 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r --arg ALERTM_HOST vm08 '.[] | select(.hostname==$ALERTM_HOST) | .addr' 2026-03-10T05:58:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:29 vm08 ceph-mon[53504]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:58:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:29 vm08 ceph-mon[53504]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:29 vm08 ceph-mon[53504]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:29 vm08 ceph-mon[53504]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.304 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:29 vm08 ceph-mon[53504]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:29 vm06 ceph-mon[56706]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T05:58:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:29 vm06 ceph-mon[56706]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:29 vm06 ceph-mon[56706]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:29 vm06 ceph-mon[56706]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:58:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:29 vm06 ceph-mon[56706]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:30.462 INFO:teuthology.orchestra.run.vm04.stderr:+ ALERTM_IP=192.168.123.108 2026-03-10T05:58:30.463 INFO:teuthology.orchestra.run.vm04.stderr:++ ceph orch host ls -f json 2026-03-10T05:58:30.463 INFO:teuthology.orchestra.run.vm04.stderr:++ jq -r '.[] | .addr' 2026-03-10T05:58:30.625 INFO:teuthology.orchestra.run.vm04.stderr:+ ALL_HOST_IPS='192.168.123.104 2026-03-10T05:58:30.625 INFO:teuthology.orchestra.run.vm04.stderr:192.168.123.106 2026-03-10T05:58:30.625 INFO:teuthology.orchestra.run.vm04.stderr:192.168.123.108' 2026-03-10T05:58:30.626 INFO:teuthology.orchestra.run.vm04.stderr:+ for ip in $ALL_HOST_IPS 2026-03-10T05:58:30.626 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.104:9100/metric 2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout: Node Exporter 
2026-03-10T05:58:30.630 INFO:teuthology.orchestra.run.vm04.stdout:[node_exporter HTML landing page; markup elided. Recoverable content: title "Node Exporter"; heading "Prometheus Node Exporter"; Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b); "Metrics" link]
2026-03-10T05:58:30.631 INFO:teuthology.orchestra.run.vm04.stderr:+ for ip in $ALL_HOST_IPS
2026-03-10T05:58:30.631 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.106:9100/metric
2026-03-10T05:58:30.634 INFO:teuthology.orchestra.run.vm04.stdout:[node_exporter HTML landing page; markup elided, same content as above]
2026-03-10T05:58:30.634 INFO:teuthology.orchestra.run.vm04.stderr:+ for ip in $ALL_HOST_IPS
2026-03-10T05:58:30.634 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.108:9100/metric
2026-03-10T05:58:30.636 INFO:teuthology.orchestra.run.vm04.stdout:[node_exporter HTML landing page; markup elided, same content as above]
2026-03-10T05:58:30.636 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.636 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T05:58:30.636 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -k -s https://192.168.123.104:3000/api/health 2026-03-10T05:58:30.643 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-10T05:58:30.643 INFO:teuthology.orchestra.run.vm04.stdout: "commit": "03f502a94d17f7dc4e6c34acdf8428aedd986e4c", 2026-03-10T05:58:30.643 INFO:teuthology.orchestra.run.vm04.stdout: "database": "ok", 2026-03-10T05:58:30.643 INFO:teuthology.orchestra.run.vm04.stdout: "version": "10.4.0" 2026-03-10T05:58:30.644 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -k -s https://192.168.123.104:3000/api/health 2026-03-10T05:58:30.644 INFO:teuthology.orchestra.run.vm04.stderr:+ jq -e '.database == "ok"' 2026-03-10T05:58:30.652 INFO:teuthology.orchestra.run.vm04.stdout:}true 2026-03-10T05:58:30.652 INFO:teuthology.orchestra.run.vm04.stderr:+ ceph orch daemon stop mon.c 2026-03-10T05:58:30.740 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:30] "GET /metrics HTTP/1.1" 200 21331 "" "Prometheus/2.51.0" 2026-03-10T05:58:30.831 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled to stop mon.c on host 'vm08' 2026-03-10T05:58:30.841 INFO:teuthology.orchestra.run.vm04.stderr:+ sleep 120 2026-03-10T05:58:31.056 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:58:31.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:30 vm04 ceph-mon[50920]: from='mgr.14424 ' 
entity='mgr.a' 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:58:31.220 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:30 vm08 ceph-mon[53504]: from='mgr.14424 ' entity='mgr.a' 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 systemd[1]: Stopping Ceph mon.c for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-c[53468]: 2026-03-10T05:58:31.154+0000 7fa322dee640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T05:58:31.220 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-c[53468]: 2026-03-10T05:58:31.155+0000 7fa322dee640 -1 mon.c@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:31.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:31.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:58:31.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:58:31.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:58:31.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:30 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:31.554 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 podman[63618]: 2026-03-10 05:58:31.21916965 +0000 UTC m=+0.077905185 container died 013927a7d5a3ec474f9a6176d98ac11c0835fc0d9cbc77791d20905537450477 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-c, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-10T05:58:31.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 podman[63618]: 2026-03-10 05:58:31.337868536 +0000 UTC m=+0.196604061 container remove 013927a7d5a3ec474f9a6176d98ac11c0835fc0d9cbc77791d20905537450477 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-c, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid)
2026-03-10T05:58:31.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 bash[63618]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-c
2026-03-10T05:58:31.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 systemd[1]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.c.service: Deactivated successfully.
2026-03-10T05:58:31.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 systemd[1]: Stopped Ceph mon.c for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027.
2026-03-10T05:58:31.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 05:58:31 vm08 systemd[1]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.c.service: Consumed 2.310s CPU time.
2026-03-10T05:58:41.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:40 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:40] "GET /metrics HTTP/1.1" 200 21331 "" "Prometheus/2.51.0"
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: Schedule stop daemon mon.c
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: mon.b calling monitor election
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: mon.a calling monitor election
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: mon.a is new leader, mons a,b in quorum (ranks 0,2)
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: monmap epoch 3
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: last_changed 2026-03-10T05:53:35.724486+0000
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: created 2026-03-10T05:52:52.167191+0000
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: min_mon_release 19 (squid)
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: election_strategy: 1
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: fsmap
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: osdmap e23: 3 total, 3 up, 3 in
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: mgrmap e21: a(active, since 3m), standbys: b
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: Health detail: HEALTH_WARN 1/3 mons down, quorum a,b
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: [WRN] MON_DOWN: 1/3 mons down, quorum a,b
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum)
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 ' entity=''
2026-03-10T05:58:47.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:58:47.308 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:46 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: Schedule stop daemon mon.c
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: mon.b calling monitor election
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: mon.a calling monitor election
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: mon.a is new leader, mons a,b in quorum (ranks 0,2)
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: monmap epoch 3
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: last_changed 2026-03-10T05:53:35.724486+0000
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: created 2026-03-10T05:52:52.167191+0000
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: min_mon_release 19 (squid)
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: election_strategy: 1
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: 2: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: fsmap
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: osdmap e23: 3 total, 3 up, 3 in
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: mgrmap e21: a(active, since 3m), standbys: b
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: Health detail: HEALTH_WARN 1/3 mons down, quorum a,b
2026-03-10T05:58:47.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: [WRN] MON_DOWN: 1/3 mons down, quorum a,b
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum)
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 ' entity=''
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:58:47.390 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:46 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:47 vm04 ceph-mon[50920]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:48.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:47 vm06 ceph-mon[56706]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:50.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:50 vm04 ceph-mon[50920]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:50.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:50 vm06 ceph-mon[56706]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:51.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:58:50 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:58:50] "GET /metrics HTTP/1.1" 200 21331 "" "Prometheus/2.51.0"
2026-03-10T05:58:52.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:52 vm04 ceph-mon[50920]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:52.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:52 vm06 ceph-mon[56706]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:54.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:54 vm04 ceph-mon[50920]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:54.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:54 vm06 ceph-mon[56706]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:56.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:56 vm04 ceph-mon[50920]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:56.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:56 vm06 ceph-mon[56706]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:57 vm04 ceph-mon[50920]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:57 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:58.057 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:58.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:57 vm06 ceph-mon[56706]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:58:58.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:57 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:58:58.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:00.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:58:59 vm06 ceph-mon[56706]: pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:58:59 vm04 ceph-mon[50920]: pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:01.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:00 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:00] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0"
2026-03-10T05:59:02.139 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:01 vm06 ceph-mon[56706]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:01 vm04 ceph-mon[50920]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:04.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:03 vm06 ceph-mon[56706]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:04.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:03 vm04 ceph-mon[50920]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:06.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:05 vm06 ceph-mon[56706]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:06.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:05 vm04 ceph-mon[50920]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:08.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:07 vm06 ceph-mon[56706]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:08.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:07 vm04 ceph-mon[50920]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:10.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:09 vm06 ceph-mon[56706]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:10.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:09 vm04 ceph-mon[50920]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:10 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:10] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0"
2026-03-10T05:59:12.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:11 vm06 ceph-mon[56706]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:12.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:11 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:11 vm04 ceph-mon[50920]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:12.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:11 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:14.138 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:13 vm06 ceph-mon[56706]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:14.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:13 vm04 ceph-mon[50920]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:16.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:15 vm04 ceph-mon[50920]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:16.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:15 vm06 ceph-mon[56706]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:18.191 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:17 vm06 ceph-mon[56706]: pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:18.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:17 vm04 ceph-mon[50920]: pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:20.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:19 vm04 ceph-mon[50920]: pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:20.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:19 vm06 ceph-mon[56706]: pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:21.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:20] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T05:59:22.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:21 vm04 ceph-mon[50920]: pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:22.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:21 vm06 ceph-mon[56706]: pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:24.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:23 vm04 ceph-mon[50920]: pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:24.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:23 vm06 ceph-mon[56706]: pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:26.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:25 vm04 ceph-mon[50920]: pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:26.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:25 vm06 ceph-mon[56706]: pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:27.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:26 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:27.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:26 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:28.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:27 vm04 ceph-mon[50920]: pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:28.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:27 vm06 ceph-mon[56706]: pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:30.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:29 vm04 ceph-mon[50920]: pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:29 vm06 ceph-mon[56706]: pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:31.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:30] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T05:59:32.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:31 vm04 ceph-mon[50920]: pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:32.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:31 vm06 ceph-mon[56706]: pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:34.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:33 vm04 ceph-mon[50920]: pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:34.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:33 vm06 ceph-mon[56706]: pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:36.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:35 vm04 ceph-mon[50920]: pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:36.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:35 vm06 ceph-mon[56706]: pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:38.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:37 vm04 ceph-mon[50920]: pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:38.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:37 vm06 ceph-mon[56706]: pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:40.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:39 vm04 ceph-mon[50920]: pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:40.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:39 vm06 ceph-mon[56706]: pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:41.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:40 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:40] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T05:59:42.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:41 vm04 ceph-mon[50920]: pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:42.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:41 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:42.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:41 vm06 ceph-mon[56706]: pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:42.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:41 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:44.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:43 vm04 ceph-mon[50920]: pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:44.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:43 vm06 ceph-mon[56706]: pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:46.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:45 vm04 ceph-mon[50920]: pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:46.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:45 vm06 ceph-mon[56706]: pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:47.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:46 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:59:47.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:46 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:59:48.290 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:48 vm06 ceph-mon[56706]: pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:48.290 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:48 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:59:48.290 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:48 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:59:48.290 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:48 vm06 ceph-mon[56706]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:59:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:48 vm04 ceph-mon[50920]: pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:48 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:59:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:48 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:59:48.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:48 vm04 ceph-mon[50920]: from='mgr.14424 ' entity='mgr.a'
2026-03-10T05:59:50.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:50 vm04 ceph-mon[50920]: pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:50.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:50 vm06 ceph-mon[56706]: pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:51.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 05:59:50 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:05:59:50] "GET /metrics HTTP/1.1" 200 21392 "" "Prometheus/2.51.0"
2026-03-10T05:59:52.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:52 vm04 ceph-mon[50920]: pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:52.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:52 vm06 ceph-mon[56706]: pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:54.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:54 vm04 ceph-mon[50920]: pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:54.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:54 vm06 ceph-mon[56706]: pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:56.307 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:56 vm04 ceph-mon[50920]: pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:56.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:56 vm06 ceph-mon[56706]: pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:57.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:57 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:57.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:57 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:59:58.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 05:59:58 vm04 ceph-mon[50920]: pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:59:58.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 05:59:58 vm06 ceph-mon[56706]: pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:00:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:00 vm04 ceph-mon[50920]: pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:00:00.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:00 vm04 ceph-mon[50920]: overall HEALTH_WARN 1/3 mons down, quorum a,b
2026-03-10T06:00:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:00 vm06 ceph-mon[56706]: pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T06:00:00.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:00 vm06 ceph-mon[56706]: overall HEALTH_WARN 1/3 mons down, quorum a,b
2026-03-10T06:00:01.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:00 
vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:06:00:00] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0" 2026-03-10T06:00:02.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:02 vm04 ceph-mon[50920]: pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:02.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:02 vm06 ceph-mon[56706]: pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:04.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:04 vm04 ceph-mon[50920]: pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:04.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:04 vm06 ceph-mon[56706]: pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:06.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:06 vm04 ceph-mon[50920]: pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:06.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:06 vm06 ceph-mon[56706]: pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:08.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:08 vm04 ceph-mon[50920]: pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:08.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:08 vm06 ceph-mon[56706]: pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:10.306 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:10 vm04 ceph-mon[50920]: pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:10.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:10 vm06 ceph-mon[56706]: pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 
GiB / 60 GiB avail 2026-03-10T06:00:11.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:10 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:06:00:10] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0" 2026-03-10T06:00:12.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:12 vm06 ceph-mon[56706]: pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:12.389 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:12 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T06:00:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:12 vm04 ceph-mon[50920]: pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:12.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:12 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T06:00:14.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:14 vm06 ceph-mon[56706]: pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:14.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:14 vm04 ceph-mon[50920]: pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:16.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:16 vm06 ceph-mon[56706]: pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:16.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:16 vm04 ceph-mon[50920]: pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:18.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:18 vm06 ceph-mon[56706]: pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB 
/ 60 GiB avail 2026-03-10T06:00:18.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:18 vm04 ceph-mon[50920]: pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:20.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:20 vm06 ceph-mon[56706]: pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:20.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:20 vm04 ceph-mon[50920]: pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:21.056 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:20 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:06:00:20] "GET /metrics HTTP/1.1" 200 21393 "" "Prometheus/2.51.0" 2026-03-10T06:00:22.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:22 vm06 ceph-mon[56706]: pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:22.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:22 vm04 ceph-mon[50920]: pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:24.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:24 vm06 ceph-mon[56706]: pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:24.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:24 vm04 ceph-mon[50920]: pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:26.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:26 vm06 ceph-mon[56706]: pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:26.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:26 vm04 ceph-mon[50920]: pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:27.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
06:00:27 vm06 ceph-mon[56706]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T06:00:27.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:27 vm04 ceph-mon[50920]: from='mgr.14424 192.168.123.104:0/4208529824' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T06:00:28.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:28 vm06 ceph-mon[56706]: pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:28.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:28 vm04 ceph-mon[50920]: pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:30.388 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:30 vm06 ceph-mon[56706]: pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:30.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:30 vm04 ceph-mon[50920]: pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T06:00:30.843 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.106:9095/api/v1/status/config 2026-03-10T06:00:30.849 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.106:9095/api/v1/status/config 2026-03-10T06:00:30.849 INFO:teuthology.orchestra.run.vm04.stderr:+ jq -e '.status == "success"' 2026-03-10T06:00:30.851 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"success","data":{"yaml":"global:\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n evaluation_interval: 10s\n external_labels:\n cluster: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027\nalerting:\n alertmanagers:\n - follow_redirects: true\n enable_http2: true\n scheme: http\n timeout: 10s\n api_version: v2\n http_sd_configs:\n - follow_redirects: true\n enable_http2: 
true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=alertmanager\nrule_files:\n- /etc/prometheus/alerting/*\nscrape_configs:\n- job_name: ceph\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027\n action: replace\n - source_labels: [instance]\n separator: ;\n regex: (.*)\n target_label: instance\n replacement: ceph_cluster\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=mgr-prometheus\n- job_name: node\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=node-exporter\n- job_name: ceph-exporter\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n 
follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=ceph-exporter\n- job_name: nvmeof\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=nvmeof\n- job_name: nfs\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.104:8765/sd/prometheus/sd-config?service=nfs\n- job_name: federate\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n params:\n match[]:\n - '{job=\"ceph\"}'\n - '{job=\"node\"}'\n - '{job=\"haproxy\"}'\n - '{job=\"ceph-exporter\"}'\n scrape_interval: 15s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /federate\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n static_configs:\n - targets: []\n"}}true 2026-03-10T06:00:30.851 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s 
http://192.168.123.106:9095/api/v1/alerts 2026-03-10T06:00:30.854 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.106:9095/api/v1/alerts 2026-03-10T06:00:30.854 INFO:teuthology.orchestra.run.vm04.stderr:+ jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"' 2026-03-10T06:00:30.857 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"success","data":{"alerts":[{"labels":{"alertname":"CephMonDownQuorumAtRisk","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"},"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"state":"firing","activeAt":"2026-03-10T05:59:02.639590217Z","value":"1e+00"},{"labels":{"alertname":"CephMonDown","severity":"warning","type":"ceph_default"},"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"state":"firing","activeAt":"2026-03-10T05:59:02.639590217Z","value":"1e+00"},{"labels":{"alertname":"CephHealthWarning","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","instance":"ceph_cluster","job":"ceph","severity":"warning","type":"ceph_default"},"annotations":{"description":"The cluster state has been HEALTH_WARN for more than 15 minutes. 
Please check 'ceph health detail' for more information.","summary":"Ceph is in the WARNING state"},"state":"pending","activeAt":"2026-03-10T05:59:03.558815712Z","value":"1e+00"},{"labels":{"alertname":"CephNodeDiskspaceWarning","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","device":"/dev/vda1","fstype":"xfs","instance":"vm06","job":"node","mountpoint":"/","nodename":"vm06","oid":"1.3.6.1.4.1.50495.1.2.1.8.4","severity":"warning","type":"ceph_default"},"annotations":{"description":"Mountpoint / on vm06 will be full in less than 5 days based on the 48 hour trailing fill rate.","summary":"Host filesystem free space is getting low"},"state":"firing","activeAt":"2026-03-10T05:58:16.94650067Z","value":"-1.6106271607013496e+10"},{"labels":{"alertname":"CephNodeDiskspaceWarning","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","device":"/dev/vda1","fstype":"xfs","instance":"vm04","job":"node","mountpoint":"/","nodename":"vm04","oid":"1.3.6.1.4.1.50495.1.2.1.8.4","severity":"warning","type":"ceph_default"},"annotations":{"description":"Mountpoint / on vm04 will be full in less than 5 days based on the 48 hour trailing fill rate.","summary":"Host filesystem free space is getting low"},"state":"firing","activeAt":"2026-03-10T05:57:26.94650067Z","value":"-2.693191943013739e+10"}]}}true 2026-03-10T06:00:30.857 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.108:9093/api/v2/status 2026-03-10T06:00:30.861 INFO:teuthology.orchestra.run.vm04.stdout:{"cluster":{"name":"01KKB508D72QFC37H3P42W2DSJ","peers":[{"address":"192.168.123.108:9094","name":"01KKB508D72QFC37H3P42W2DSJ"}],"status":"ready"},"config":{"original":"global:\n resolve_timeout: 5m\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n smtp_hello: localhost\n smtp_require_tls: true\n pagerduty_url: https://events.pagerduty.com/v2/enqueue\n opsgenie_api_url: https://api.opsgenie.com/\n wechat_api_url: https://qyapi.weixin.qq.com/cgi-bin/\n 
victorops_api_url: https://alert.victorops.com/integrations/generic/20131114/alert/\n telegram_api_url: https://api.telegram.org\n webex_api_url: https://webexapis.com/v1/messages\nroute:\n receiver: default\n continue: false\n routes:\n - receiver: ceph-dashboard\n group_by:\n - alertname\n continue: false\n group_wait: 10s\n group_interval: 10s\n repeat_interval: 1h\nreceivers:\n- name: default\n- name: ceph-dashboard\n webhook_configs:\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n url: https://vm04.local:8443/api/prometheus_receiver\n max_alerts: 0\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n url: https://vm06.local:8443/api/prometheus_receiver\n max_alerts: 0\ntemplates: []\n"},"uptime":"2026-03-10T05:55:01.671Z","versionInfo":{"branch":"HEAD","buildDate":"20221222-14:51:36","buildUser":"root@abe866dd5717","goVersion":"go1.19.4","revision":"258fab7cdd551f2cf251ed0348f0ad7289aee789","version":"0.25.0"}} 2026-03-10T06:00:30.861 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-10T06:00:30.864 INFO:teuthology.orchestra.run.vm04.stdout:[{"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. 
The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"endsAt":"2026-03-10T06:03:32.639Z","fingerprint":"35f52afe107f2bd8","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T05:59:32.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T05:59:32.641Z","generatorURL":"http://vm06.local:9095/graph?g0.expr=count%28ceph_mon_quorum_status+%3D%3D+0%29+%3C%3D+%28count%28ceph_mon_metadata%29+-+floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29\u0026g0.tab=1","labels":{"alertname":"CephMonDown","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","severity":"warning","type":"ceph_default"}},{"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"endsAt":"2026-03-10T06:03:32.639Z","fingerprint":"44799154e61a7aef","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T05:59:32.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T05:59:32.641Z","generatorURL":"http://vm06.local:9095/graph?g0.expr=%28%28ceph_health_detail%7Bname%3D%22MON_DOWN%22%7D+%3D%3D+1%29+%2A+on+%28%29+%28count%28ceph_mon_quorum_status+%3D%3D+1%29+%3D%3D+bool+%28floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29%29%29+%3D%3D+1\u0026g0.tab=1","labels":{"alertname":"CephMonDownQuorumAtRisk","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"}},{"annotations":{"description":"Mountpoint / on vm04 will be full in less than 5 days based on the 48 hour trailing fill rate.","summary":"Host filesystem free space 
is getting low"},"endsAt":"2026-03-10T06:03:46.946Z","fingerprint":"82bec5b51a011dfc","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T05:57:26.946Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T05:59:46.950Z","generatorURL":"http://vm06.local:9095/graph?g0.expr=predict_linear%28node_filesystem_free_bytes%7Bdevice%3D~%22%2F.%2A%22%7D%5B2d%5D%2C+3600+%2A+24+%2A+5%29+%2A+on+%28instance%29+group_left+%28nodename%29+node_uname_info+%3C+0\u0026g0.tab=1","labels":{"alertname":"CephNodeDiskspaceWarning","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","device":"/dev/vda1","fstype":"xfs","instance":"vm04","job":"node","mountpoint":"/","nodename":"vm04","oid":"1.3.6.1.4.1.50495.1.2.1.8.4","severity":"warning","type":"ceph_default"}},{"annotations":{"description":"Mountpoint / on vm06 will be full in less than 5 days based on the 48 hour trailing fill rate.","summary":"Host filesystem free space is getting low"},"endsAt":"2026-03-10T06:03:26.946Z","fingerprint":"e790e82d42ed8884","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T05:58:16.946Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T05:59:26.948Z","generatorURL":"http://vm06.local:9095/graph?g0.expr=predict_linear%28node_filesystem_free_bytes%7Bdevice%3D~%22%2F.%2A%22%7D%5B2d%5D%2C+3600+%2A+24+%2A+5%29+%2A+on+%28instance%29+group_left+%28nodename%29+node_uname_info+%3C+0\u0026g0.tab=1","labels":{"alertname":"CephNodeDiskspaceWarning","cluster":"2a12cf18-1c45-11f1-9f2e-3f4ab8754027","device":"/dev/vda1","fstype":"xfs","instance":"vm06","job":"node","mountpoint":"/","nodename":"vm06","oid":"1.3.6.1.4.1.50495.1.2.1.8.4","severity":"warning","type":"ceph_default"}}] 2026-03-10T06:00:30.864 INFO:teuthology.orchestra.run.vm04.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-10T06:00:30.864 INFO:teuthology.orchestra.run.vm04.stderr:+ jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | 
.state == "active"' 2026-03-10T06:00:30.867 INFO:teuthology.orchestra.run.vm04.stdout:true 2026-03-10T06:00:31.003 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:30 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a[51154]: ::ffff:192.168.123.106 - - [10/Mar/2026:06:00:30] "GET /metrics HTTP/1.1" 200 21390 "" "Prometheus/2.51.0" 2026-03-10T06:00:31.033 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T06:00:31.035 INFO:tasks.cephadm:Teardown begin 2026-03-10T06:00:31.035 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:31.061 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:31.086 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:31.112 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T06:00:31.112 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 -- ceph mgr module disable cephadm 2026-03-10T06:00:31.273 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/mon.a/config 2026-03-10T06:00:31.288 INFO:teuthology.orchestra.run.vm04.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T06:00:31.307 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T06:00:31.307 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 
2026-03-10T06:00:31.307 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T06:00:31.321 DEBUG:teuthology.orchestra.run.vm06:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T06:00:31.338 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T06:00:31.351 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T06:00:31.351 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T06:00:31.352 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a 2026-03-10T06:00:31.556 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:31 vm04 systemd[1]: Stopping Ceph mon.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 2026-03-10T06:00:31.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:31 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a[50895]: 2026-03-10T06:00:31.461+0000 7f5db74ff640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T06:00:31.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:31 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a[50895]: 2026-03-10T06:00:31.461+0000 7f5db74ff640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T06:00:31.557 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 06:00:31 vm04 podman[72993]: 2026-03-10 06:00:31.537034278 +0000 UTC m=+0.092428426 container died f5dff0ec46a71567a55c908e4128afc7401333a71b37efe4d5b71127725a0e65 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-a, org.label-schema.schema-version=1.0, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T06:00:31.713 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.a.service' 2026-03-10T06:00:31.747 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:31.747 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T06:00:31.747 INFO:tasks.cephadm.mon.c:Stopping mon.b... 2026-03-10T06:00:31.747 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.b 2026-03-10T06:00:32.041 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:31 vm06 systemd[1]: Stopping Ceph mon.b for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 
2026-03-10T06:00:32.041 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:31 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-b[56683]: 2026-03-10T06:00:31.854+0000 7f9b0f1d5640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T06:00:32.041 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:31 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-b[56683]: 2026-03-10T06:00:31.854+0000 7f9b0f1d5640 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T06:00:32.041 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 06:00:31 vm06 podman[66661]: 2026-03-10 06:00:31.929636474 +0000 UTC m=+0.090069708 container died fa39630c74b7e253fb4f33a31bc6278d17e095ac9afc8eaa0f24ebd9ea59259d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mon-b, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS) 2026-03-10T06:00:32.118 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.b.service' 2026-03-10T06:00:32.151 
DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:32.151 INFO:tasks.cephadm.mon.c:Stopped mon.b 2026-03-10T06:00:32.151 INFO:tasks.cephadm.mon.c:Stopping mon.c... 2026-03-10T06:00:32.151 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.c 2026-03-10T06:00:32.185 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mon.c.service' 2026-03-10T06:00:32.260 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:32.260 INFO:tasks.cephadm.mon.c:Stopped mon.c 2026-03-10T06:00:32.260 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T06:00:32.261 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a 2026-03-10T06:00:32.534 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:32 vm04 systemd[1]: Stopping Ceph mgr.a for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 2026-03-10T06:00:32.534 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:32 vm04 podman[73109]: 2026-03-10 06:00:32.398218848 +0000 UTC m=+0.053772278 container died 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid) 
2026-03-10T06:00:32.534 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:32 vm04 podman[73109]: 2026-03-10 06:00:32.521942523 +0000 UTC m=+0.177495953 container remove 6600e72718731ee3728e2e2ef48301fc6920045a83fd905559a8108f725cf448 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T06:00:32.534 INFO:journalctl@ceph.mgr.a.vm04.stdout:Mar 10 06:00:32 vm04 bash[73109]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-a 2026-03-10T06:00:32.583 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.a.service' 2026-03-10T06:00:32.612 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:32.612 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T06:00:32.612 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 2026-03-10T06:00:32.612 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.b 2026-03-10T06:00:32.647 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 06:00:32 vm06 systemd[1]: Stopping Ceph mgr.b for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 
2026-03-10T06:00:32.924 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 06:00:32 vm06 podman[66778]: 2026-03-10 06:00:32.750370479 +0000 UTC m=+0.052125722 container died c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T06:00:32.924 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 06:00:32 vm06 podman[66778]: 2026-03-10 06:00:32.872101803 +0000 UTC m=+0.173857037 container remove c2d1e5eb8ed7a2cb58ec576691745eae6377f4b77013348765974c3740adc79a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T06:00:32.924 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 06:00:32 vm06 bash[66778]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-mgr-b 2026-03-10T06:00:32.924 INFO:journalctl@ceph.mgr.b.vm06.stdout:Mar 10 06:00:32 vm06 systemd[1]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T06:00:32.933 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@mgr.b.service' 2026-03-10T06:00:33.003 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:33.003 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-10T06:00:33.003 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T06:00:33.004 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.0 2026-03-10T06:00:33.306 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:33 vm04 systemd[1]: Stopping Ceph osd.0 for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 
2026-03-10T06:00:33.306 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:33 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T06:00:33.094+0000 7efed6fd6640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T06:00:33.306 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:33 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T06:00:33.094+0000 7efed6fd6640 -1 osd.0 23 *** Got signal Terminated *** 2026-03-10T06:00:33.306 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:33 vm04 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0[63721]: 2026-03-10T06:00:33.094+0000 7efed6fd6640 -1 osd.0 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T06:00:38.397 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73223]: 2026-03-10 06:00:38.12652588 +0000 UTC m=+5.044866399 container died cea0e1ddd5e1a97767453e620c1b9310994f85d11347ab44879d4ca039740bc7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T06:00:38.397 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73223]: 
2026-03-10 06:00:38.251857477 +0000 UTC m=+5.170198005 container remove cea0e1ddd5e1a97767453e620c1b9310994f85d11347ab44879d4ca039740bc7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T06:00:38.397 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 bash[73223]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0 2026-03-10T06:00:38.397 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.374485931 +0000 UTC m=+0.014684753 container create 18a3056f41526121a59c05239477e8eede1f8676d1d1e953f4b3e70802012ef9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0-deactivate, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph 
Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T06:00:38.649 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.407597381 +0000 UTC m=+0.047796214 container init 18a3056f41526121a59c05239477e8eede1f8676d1d1e953f4b3e70802012ef9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T06:00:38.649 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.411960558 +0000 UTC m=+0.052159370 container start 18a3056f41526121a59c05239477e8eede1f8676d1d1e953f4b3e70802012ef9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-10T06:00:38.649 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.416669251 +0000 UTC m=+0.056868073 container attach 18a3056f41526121a59c05239477e8eede1f8676d1d1e953f4b3e70802012ef9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0-deactivate, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-10T06:00:38.649 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.368865293 +0000 UTC m=+0.009064115 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T06:00:38.649 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 06:00:38 vm04 podman[73299]: 2026-03-10 06:00:38.533614959 +0000 UTC m=+0.173813771 container died 18a3056f41526121a59c05239477e8eede1f8676d1d1e953f4b3e70802012ef9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-10T06:00:38.665 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.0.service' 2026-03-10T06:00:38.694 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:38.694 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T06:00:38.694 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T06:00:38.694 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.1 2026-03-10T06:00:39.138 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:38 vm06 systemd[1]: Stopping Ceph osd.1 for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 
2026-03-10T06:00:39.139 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:38 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1[60612]: 2026-03-10T06:00:38.787+0000 7f5039766640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T06:00:39.139 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:38 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1[60612]: 2026-03-10T06:00:38.787+0000 7f5039766640 -1 osd.1 23 *** Got signal Terminated *** 2026-03-10T06:00:39.139 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:38 vm06 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1[60612]: 2026-03-10T06:00:38.787+0000 7f5039766640 -1 osd.1 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T06:00:44.079 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:43 vm06 podman[66890]: 2026-03-10 06:00:43.811222339 +0000 UTC m=+5.037923602 container died 37f6423bc9ffac21fca88c948f72fb7c2dc4f3a358f019f64ab91d2c21beb2e8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T06:00:44.079 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:43 vm06 podman[66890]: 
2026-03-10 06:00:43.940123624 +0000 UTC m=+5.166824887 container remove 37f6423bc9ffac21fca88c948f72fb7c2dc4f3a358f019f64ab91d2c21beb2e8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.license=GPLv2) 2026-03-10T06:00:44.079 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:43 vm06 bash[66890]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.078534597 +0000 UTC m=+0.015273376 container create 71106499d4882e83edc207febef531c18245941eebecabc7bb8d0746ebeb9df2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.111451977 +0000 UTC m=+0.048190765 container init 71106499d4882e83edc207febef531c18245941eebecabc7bb8d0746ebeb9df2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.115465327 +0000 UTC m=+0.052204096 container start 71106499d4882e83edc207febef531c18245941eebecabc7bb8d0746ebeb9df2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default) 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.119828934 +0000 UTC m=+0.056567723 container attach 71106499d4882e83edc207febef531c18245941eebecabc7bb8d0746ebeb9df2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.072531391 +0000 UTC m=+0.009270170 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T06:00:44.389 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 10 06:00:44 vm06 podman[66967]: 2026-03-10 06:00:44.242095581 +0000 UTC m=+0.178834360 container died 71106499d4882e83edc207febef531c18245941eebecabc7bb8d0746ebeb9df2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2) 2026-03-10T06:00:44.510 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.1.service' 2026-03-10T06:00:44.540 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:44.540 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T06:00:44.540 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T06:00:44.540 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.2 2026-03-10T06:00:45.055 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:44 vm08 systemd[1]: Stopping Ceph osd.2 for 2a12cf18-1c45-11f1-9f2e-3f4ab8754027... 
2026-03-10T06:00:45.055 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:44 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T06:00:44.639+0000 7fb51b916640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T06:00:45.055 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:44 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T06:00:44.639+0000 7fb51b916640 -1 osd.2 23 *** Got signal Terminated *** 2026-03-10T06:00:45.055 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:44 vm08 ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2[57240]: 2026-03-10T06:00:44.639+0000 7fb51b916640 -1 osd.2 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T06:00:49.994 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:49 vm08 podman[64225]: 2026-03-10 06:00:49.671094639 +0000 UTC m=+5.045363022 container died 12584453ec00873e94cbb2666020bbc123b3558b58e6231f4ae4b87b865ef8a2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) 2026-03-10T06:00:50.264 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64225]: 
2026-03-10 06:00:50.014280541 +0000 UTC m=+5.388548933 container remove 12584453ec00873e94cbb2666020bbc123b3558b58e6231f4ae4b87b865ef8a2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T06:00:50.265 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 bash[64225]: ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2 2026-03-10T06:00:50.555 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64305]: 2026-03-10 06:00:50.165019184 +0000 UTC m=+0.009352994 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T06:00:50.555 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64305]: 2026-03-10 06:00:50.351888596 +0000 UTC m=+0.196222397 container create 9342cf2d9726a6dba0fc31f58569a8d2bd3377ffcfac828636895eaacdd87bcd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T06:00:50.555 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64305]: 2026-03-10 06:00:50.473581176 +0000 UTC m=+0.317914977 container init 9342cf2d9726a6dba0fc31f58569a8d2bd3377ffcfac828636895eaacdd87bcd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2-deactivate, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T06:00:50.555 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64305]: 2026-03-10 06:00:50.477614314 +0000 UTC m=+0.321948115 container start 9342cf2d9726a6dba0fc31f58569a8d2bd3377ffcfac828636895eaacdd87bcd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2-deactivate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T06:00:50.555 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 06:00:50 vm08 podman[64305]: 2026-03-10 06:00:50.511046404 +0000 UTC m=+0.355380195 container attach 9342cf2d9726a6dba0fc31f58569a8d2bd3377ffcfac828636895eaacdd87bcd (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027-osd-2-deactivate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T06:00:50.766 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-2a12cf18-1c45-11f1-9f2e-3f4ab8754027@osd.2.service' 2026-03-10T06:00:50.796 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T06:00:50.796 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T06:00:50.796 DEBUG:teuthology.orchestra.run.vm04:> sudo 
/home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force --keep-logs 2026-03-10T06:00:50.918 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:52.637 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force --keep-logs 2026-03-10T06:00:52.757 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:54.606 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force --keep-logs 2026-03-10T06:00:54.727 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:56.266 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:56.294 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:56.320 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:56.346 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T06:00:56.346 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm04/crash 2026-03-10T06:00:56.346 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash -- . 
2026-03-10T06:00:56.370 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash: Cannot open: No such file or directory 2026-03-10T06:00:56.370 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:00:56.371 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm06/crash 2026-03-10T06:00:56.371 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash -- . 2026-03-10T06:00:56.398 INFO:teuthology.orchestra.run.vm06.stderr:tar: /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash: Cannot open: No such file or directory 2026-03-10T06:00:56.398 INFO:teuthology.orchestra.run.vm06.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:00:56.399 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm08/crash 2026-03-10T06:00:56.399 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash -- . 2026-03-10T06:00:56.423 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/crash: Cannot open: No such file or directory 2026-03-10T06:00:56.423 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:00:56.424 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-10T06:00:56.424 DEBUG:teuthology.orchestra.run.vm04:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | egrep -v 'mons down' | egrep -v 'mon down' | egrep -v 'out of quorum' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | head -n 1 2026-03-10T06:00:56.452 INFO:tasks.cephadm:Compressing logs... 2026-03-10T06:00:56.452 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:00:56.494 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:00:56.495 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:00:56.517 INFO:teuthology.orchestra.run.vm04.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:00:56.517 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:00:56.518 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.a.log 2026-03-10T06:00:56.518 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log 2026-03-10T06:00:56.520 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:00:56.520 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:00:56.520 
INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log 2026-03-10T06:00:56.521 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.c.log 2026-03-10T06:00:56.521 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log: 89.0% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:00:56.521 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log 2026-03-10T06:00:56.522 INFO:teuthology.orchestra.run.vm06.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:00:56.523 INFO:teuthology.orchestra.run.vm06.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:00:56.523 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.c.log: 94.7% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log.gz 2026-03-10T06:00:56.523 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.a.log: /var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.a.log 2026-03-10T06:00:56.524 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log 2026-03-10T06:00:56.524 INFO:teuthology.orchestra.run.vm06.stderr: 82.8% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:00:56.524 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.b.log 2026-03-10T06:00:56.525 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log 2026-03-10T06:00:56.526 
INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log: 87.8% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log.gz 2026-03-10T06:00:56.526 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log: /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.b.log: 94.8% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log.gz 2026-03-10T06:00:56.527 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log 2026-03-10T06:00:56.527 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log: 79.8% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log.gz 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm04.stderr: 93.0% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log: 80.0% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log.gz 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log 2026-03-10T06:00:56.528 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log: 90.1% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log.gz 2026-03-10T06:00:56.529 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- 
/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.b.log 2026-03-10T06:00:56.529 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log: 90.1% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log.gz 2026-03-10T06:00:56.529 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.2.log 2026-03-10T06:00:56.529 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log: 88.0% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log.gz 2026-03-10T06:00:56.530 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log: 87.5% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.log.gz 2026-03-10T06:00:56.530 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.1.log 2026-03-10T06:00:56.531 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log 2026-03-10T06:00:56.534 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.b.log: 90.7% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.b.log.gz 2026-03-10T06:00:56.536 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log 2026-03-10T06:00:56.537 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log: 89.9% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.audit.log.gz 2026-03-10T06:00:56.544 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- 
/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log 2026-03-10T06:00:56.544 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log: 82.5% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph.cephadm.log.gz 2026-03-10T06:00:56.551 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.2.log: 93.2% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.2.log.gz 2026-03-10T06:00:56.555 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.0.log 2026-03-10T06:00:56.557 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log: 94.8% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-volume.log.gz 2026-03-10T06:00:56.560 INFO:teuthology.orchestra.run.vm08.stderr: 93.0% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.c.log.gz 2026-03-10T06:00:56.562 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-10T06:00:56.562 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.053s 2026-03-10T06:00:56.562 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.065s 2026-03-10T06:00:56.562 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.018s 2026-03-10T06:00:56.565 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.1.log: 93.4% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.1.log.gz 2026-03-10T06:00:56.588 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.0.log: 90.8% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mgr.a.log.gz 2026-03-10T06:00:56.595 INFO:teuthology.orchestra.run.vm04.stderr: 93.2% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-osd.0.log.gz 2026-03-10T06:00:56.595 
INFO:teuthology.orchestra.run.vm06.stderr: 92.6% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.b.log.gz 2026-03-10T06:00:56.597 INFO:teuthology.orchestra.run.vm06.stderr: 2026-03-10T06:00:56.597 INFO:teuthology.orchestra.run.vm06.stderr:real 0m0.087s 2026-03-10T06:00:56.597 INFO:teuthology.orchestra.run.vm06.stderr:user 0m0.105s 2026-03-10T06:00:56.597 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.025s 2026-03-10T06:00:56.682 INFO:teuthology.orchestra.run.vm04.stderr: 91.2% -- replaced with /var/log/ceph/2a12cf18-1c45-11f1-9f2e-3f4ab8754027/ceph-mon.a.log.gz 2026-03-10T06:00:56.684 INFO:teuthology.orchestra.run.vm04.stderr: 2026-03-10T06:00:56.684 INFO:teuthology.orchestra.run.vm04.stderr:real 0m0.176s 2026-03-10T06:00:56.684 INFO:teuthology.orchestra.run.vm04.stderr:user 0m0.229s 2026-03-10T06:00:56.684 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.025s 2026-03-10T06:00:56.684 INFO:tasks.cephadm:Archiving logs... 2026-03-10T06:00:56.684 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm04/log 2026-03-10T06:00:56.684 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:00:56.762 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm06/log 2026-03-10T06:00:56.763 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:00:56.792 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm08/log 2026-03-10T06:00:56.792 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:00:56.823 INFO:tasks.cephadm:Removing cluster... 
2026-03-10T06:00:56.823 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force 2026-03-10T06:00:56.949 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:57.166 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force 2026-03-10T06:00:57.290 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:57.491 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 --force 2026-03-10T06:00:57.615 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 2a12cf18-1c45-11f1-9f2e-3f4ab8754027 2026-03-10T06:00:57.823 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T06:00:57.823 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:00:57.838 DEBUG:teuthology.orchestra.run.vm06:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:00:57.853 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T06:00:57.867 INFO:tasks.cephadm:Teardown complete 2026-03-10T06:00:57.867 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T06:00:57.869 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-10T06:00:57.869 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:00:57.880 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:00:57.895 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T06:00:57.938 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:00:57.938 DEBUG:teuthology.orchestra.run.vm04:> 2026-03-10T06:00:57.938 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:00:57.938 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y remove $d || true 2026-03-10T06:00:57.938 DEBUG:teuthology.orchestra.run.vm04:> done 2026-03-10T06:00:57.943 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, 
python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:00:57.943 DEBUG:teuthology.orchestra.run.vm06:> 2026-03-10T06:00:57.943 DEBUG:teuthology.orchestra.run.vm06:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:00:57.943 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y remove $d || true 2026-03-10T06:00:57.943 DEBUG:teuthology.orchestra.run.vm06:> done 2026-03-10T06:00:57.948 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T06:00:57.948 DEBUG:teuthology.orchestra.run.vm08:> 2026-03-10T06:00:57.948 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T06:00:57.948 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y remove $d || true 2026-03-10T06:00:57.948 DEBUG:teuthology.orchestra.run.vm08:> done 2026-03-10T06:00:58.118 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies: 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 39 M 2026-03-10T06:00:58.119 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-10T06:00:58.121 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-10T06:00:58.121 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-10T06:00:58.134 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-10T06:00:58.135 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-10T06:00:58.139 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 
2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Removing: 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies: 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Remove 2 Packages 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 39 M 2026-03-10T06:00:58.140 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check 2026-03-10T06:00:58.146 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded. 2026-03-10T06:00:58.146 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies: 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:00:58.156 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 39 M 2026-03-10T06:00:58.157 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T06:00:58.158 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded. 2026-03-10T06:00:58.159 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction 2026-03-10T06:00:58.159 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T06:00:58.159 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T06:00:58.168 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-10T06:00:58.174 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 
2026-03-10T06:00:58.174 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-10T06:00:58.190 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:00:58.191 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1 2026-03-10T06:00:58.193 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.205 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-10T06:00:58.213 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:00:58.217 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.227 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.227 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:00:58.228 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T06:00:58.228 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T06:00:58.228 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-10T06:00:58.228 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:00:58.230 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.264 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.286 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.288 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.302 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.303 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.305 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.380 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.380 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.393 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.393 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.395 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.395 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T06:00:58.526 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.527 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:00:58.527 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T06:00:58.527 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-10T06:00:58.527 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:00:58.527 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout:Removed: 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:00:58.532 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:58.536 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:00:58.733 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Remove 4 Packages
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 212 M
2026-03-10T06:00:58.734 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Remove 4 Packages
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:58.737 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 212 M
2026-03-10T06:00:58.738 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:00:58.738 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:00:58.738 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:00:58.741 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:00:58.741 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:00:58.758 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:58.759 INFO:teuthology.orchestra.run.vm06.stdout:Remove 4 Packages
2026-03-10T06:00:58.759 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:58.759 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 212 M
2026-03-10T06:00:58.759 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:00:58.761 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:00:58.761 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:00:58.763 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:00:58.763 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:00:58.764 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:00:58.765 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:00:58.785 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:00:58.786 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:00:58.825 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:00:58.827 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-10T06:00:58.831 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.833 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.833 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:00:58.835 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:00:58.837 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:00:58.838 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:00:58.848 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T06:00:58.853 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.854 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.854 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.855 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T06:00:58.858 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T06:00:58.873 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.932 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.932 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.932 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:00:58.932 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:00:58.936 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.936 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.936 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:00:58.936 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:00:58.944 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T06:00:58.944 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T06:00:58.944 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T06:00:58.944 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:58.994 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:58.998 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.001 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:00:59.217 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:00:59.217 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:59.217 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Remove 8 Packages
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 28 M
2026-03-10T06:00:59.218 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:00:59.219 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Remove 8 Packages
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 28 M
2026-03-10T06:00:59.220 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:00:59.221 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:00:59.221 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:00:59.223 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:00:59.223 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:00:59.223 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:00:59.224 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:59.224 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.224 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Remove 8 Packages
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 28 M
2026-03-10T06:00:59.225 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:00:59.228 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:00:59.228 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:00:59.246 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:00:59.247 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:00:59.249 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:00:59.249 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:00:59.253 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:00:59.254 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:00:59.288 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:00:59.292 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T06:00:59.294 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.296 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-10T06:00:59.297 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:00:59.298 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.299 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:00:59.301 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.302 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:00:59.303 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:00:59.305 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:00:59.306 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:00:59.307 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T06:00:59.308 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:00:59.308 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T06:00:59.308 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:00:59.311 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:00:59.311 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T06:00:59.313 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:00:59.314 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T06:00:59.316 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.330 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:00:59.331 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.332 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T06:00:59.338 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.339 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.339 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.347 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:00:59.358 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:00:59.360 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.362 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.369 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.369 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:00:59.370 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T06:00:59.370 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T06:00:59.370 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T06:00:59.370 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.371 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T06:00:59.455 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T06:00:59.462 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.462 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.462 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T06:00:59.462 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T06:00:59.463 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T06:00:59.463 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T06:00:59.463 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T06:00:59.463 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T06:00:59.471 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.514 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.516 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:00:59.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T06:00:59.526 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.526 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:00:59.526 INFO:teuthology.orchestra.run.vm06.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout: zip-3.0-35.el9.x86_64
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.527 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:00:59.728 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:00:59.729 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T06:00:59.733 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T06:00:59.734 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout:Remove 102 Packages
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:00:59.735 INFO:teuthology.orchestra.run.vm08.stdout:===========================================================================================
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout:===========================================================================================
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T06:00:59.736 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:===========================================================================================
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:Remove 102 Packages
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 613 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 613 M
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:00:59.737 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:00:59.744 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:00:59.750 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T06:00:59.750 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:00:59.750 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T06:00:59.750 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T06:00:59.751 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout:Remove 102 Packages
2026-03-10T06:00:59.752 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:00:59.753 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 613 M
2026-03-10T06:00:59.753 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:00:59.762 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:00:59.762 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:00:59.763 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:00:59.763 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T06:00:59.779 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded. 2026-03-10T06:00:59.779 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test 2026-03-10T06:00:59.870 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-10T06:00:59.870 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-10T06:00:59.871 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T06:00:59.871 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T06:00:59.886 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded. 2026-03-10T06:00:59.886 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction 2026-03-10T06:01:00.016 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-10T06:01:00.016 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.022 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T06:01:00.022 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.023 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.030 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1 2026-03-10T06:01:00.030 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.031 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.037 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: 
ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T06:01:00.039 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:00.040 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.055 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-10T06:01:00.056 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:00.058 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T06:01:00.060 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:01:00.061 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.071 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.074 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:00.079 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T06:01:00.079 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.094 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T06:01:00.094 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.097 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 
2026-03-10T06:01:00.097 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.136 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.145 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:01:00.150 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:01:00.150 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.153 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.153 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T06:01:00.161 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.163 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:01:00.164 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T06:01:00.167 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:01:00.167 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.168 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:01:00.169 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T06:01:00.169 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.172 
INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:01:00.178 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.180 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:01:00.181 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:00.184 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:01:00.184 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:01:00.188 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T06:01:00.189 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:01:00.193 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T06:01:00.197 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:01:00.201 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T06:01:00.201 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:00.205 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T06:01:00.210 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.219 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.225 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.225 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.226 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T06:01:00.226 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T06:01:00.226 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:01:00.226 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T06:01:00.227 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:01:00.232 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.233 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.235 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.235 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.235 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T06:01:00.235 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:00.242 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.242 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:00.243 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.253 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.256 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:01:00.258 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.258 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.258 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T06:01:00.258 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:01:00.258 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.259 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.259 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T06:01:00.259 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:00.260 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:01:00.264 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:01:00.266 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.266 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.273 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:01:00.276 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.276 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T06:01:00.279 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:01:00.279 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T06:01:00.283 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:01:00.284 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T06:01:00.285 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:01:00.288 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:01:00.288 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T06:01:00.292 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:01:00.297 
INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:01:00.297 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T06:01:00.302 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:01:00.308 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T06:01:00.308 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:01:00.309 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T06:01:00.315 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:01:00.316 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T06:01:00.324 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:01:00.326 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T06:01:00.331 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T06:01:00.332 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T06:01:00.337 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T06:01:00.344 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T06:01:00.346 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T06:01:00.355 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T06:01:00.359 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : 
python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T06:01:00.361 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T06:01:00.365 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T06:01:00.365 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.366 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T06:01:00.368 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T06:01:00.369 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T06:01:00.371 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T06:01:00.372 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.378 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T06:01:00.381 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T06:01:00.388 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T06:01:00.388 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.392 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T06:01:00.392 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.395 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.399 
INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T06:01:00.466 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T06:01:00.481 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T06:01:00.489 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T06:01:00.494 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T06:01:00.495 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.495 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-10T06:01:00.495 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:00.497 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.504 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T06:01:00.509 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T06:01:00.518 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.518 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-10T06:01:00.518 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:00.519 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.523 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.523 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.523 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-10T06:01:00.523 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:01:00.524 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.538 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T06:01:00.544 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T06:01:00.544 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.546 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T06:01:00.549 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T06:01:00.551 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T06:01:00.559 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T06:01:00.564 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T06:01:00.567 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T06:01:00.567 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : 
protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T06:01:00.568 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:00.569 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T06:01:00.570 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.572 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T06:01:00.575 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T06:01:00.577 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T06:01:00.583 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.587 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-10T06:01:00.589 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-10T06:01:00.591 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-10T06:01:00.593 
INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:00.593 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-10T06:01:00.595 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.597 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 
2026-03-10T06:01:00.600 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:00.601 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:01:00.602 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:01:00.606 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:01:00.606 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:01:00.611 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T06:01:00.613 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T06:01:00.614 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T06:01:00.615 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T06:01:00.618 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T06:01:00.619 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T06:01:00.620 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T06:01:00.623 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T06:01:00.623 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T06:01:00.625 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T06:01:00.627 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:01:00.629 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T06:01:00.634 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:01:00.634 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T06:01:00.640 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T06:01:00.655 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:01:00.666 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:01:00.668 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:01:00.674 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:01:00.676 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:01:00.680 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:01:00.682 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:01:00.684 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:01:00.689 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T06:01:00.697 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:01:00.699 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:01:00.700 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T06:01:00.702 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.702 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:01:00.702 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:01:00.702 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:00.702 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.703 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T06:01:00.704 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:01:00.706 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:01:00.708 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T06:01:00.710 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:01:00.711 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T06:01:00.712 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.712 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:01:00.714 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:01:00.714 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T06:01:00.716 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:01:00.717 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T06:01:00.719 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:01:00.721 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:01:00.723 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:01:00.726 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:01:00.728 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:01:00.731 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:01:00.735 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.735 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:01:00.735 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:01:00.735 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:00.735 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.738 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:01:00.738 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.738 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:01:00.738 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:01:00.738 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:00.739 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.742 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:01:00.743 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.744 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:01:00.745 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:01:00.747 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:01:00.747 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:01:00.748 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T06:01:00.749 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T06:01:00.750 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:01:00.750 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:01:00.751 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T06:01:00.752 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:01:00.754 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T06:01:00.754 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:01:00.755 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:01:00.757 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T06:01:00.757 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:01:00.759 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T06:01:00.759 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:01:00.760 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:01:00.761 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T06:01:00.762 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:01:00.764 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T06:01:00.765 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:01:00.767 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T06:01:00.768 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:01:00.770 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:01:00.774 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:01:00.774 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:01:00.775 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T06:01:00.776 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:01:00.777 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:01:00.779 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:01:00.779 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T06:01:00.780 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:01:00.781 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T06:01:00.781 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:01:00.783 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:01:00.783 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T06:01:00.786 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T06:01:00.786 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:01:00.788 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:01:00.790 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:01:00.791 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T06:01:00.791 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:01:00.795 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:01:00.795 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T06:01:00.795 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:01:00.799 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:01:00.800 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T06:01:00.803 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:01:00.804 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T06:01:00.805 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:01:00.808 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:01:00.808 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:01:00.810 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T06:01:00.811 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:01:00.811 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:01:00.813 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T06:01:00.813 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:01:00.814 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:01:00.815 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:01:00.817 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T06:01:00.819 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T06:01:00.819 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:01:00.820 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:01:00.823 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:01:00.824 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:01:00.825 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T06:01:00.826 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:01:00.828 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T06:01:00.832 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T06:01:00.835 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:01:00.840 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:01:00.840 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T06:01:00.842 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.842 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:01:00.842 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:00.843 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:01:00.845 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T06:01:00.845 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:01:00.847 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:01:00.848 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T06:01:00.850 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.850 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T06:01:00.852 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T06:01:00.853 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:01:00.856 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:01:00.857 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T06:01:00.861 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T06:01:00.876 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.876 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:01:00.876 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:00.878 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.878 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:00.880 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.881 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T06:01:00.881 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:00.884 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.888 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:00.932 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:00.968 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:01:00.991 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:01:01.012 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:01.012 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:01.028 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:01:01.028 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:01.037 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T06:01:01.037 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:01.042 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:01.078 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:01:01.078 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T06:01:01.110 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T06:01:01.118 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:01:01.125 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T06:01:01.126 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:01:01.126 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:01.183 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T06:01:01.183 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp
2026-03-10T06:01:06.513 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:06.532 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:01:06.550 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.550 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.562 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.566 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:01:06.570 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:01:06.576 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:01:06.579 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:01:06.579 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:01:06.596 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:01:06.598 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-10T06:01:06.601 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-10T06:01:06.604 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-10T06:01:06.606 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-10T06:01:06.611 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-10T06:01:06.619 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-10T06:01:06.623 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-10T06:01:06.624 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp
2026-03-10T06:01:06.672 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /sys
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /proc
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /mnt
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /var/tmp
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /home
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /root
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /tmp
2026-03-10T06:01:06.680 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:06.681 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:01:06.689 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T06:01:06.697 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.697 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.705 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.706 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.706 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.708 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:01:06.710 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:01:06.713 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:01:06.714 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T06:01:06.715 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:01:06.715 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:01:06.717 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T06:01:06.719 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102
2026-03-10T06:01:06.721 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102
2026-03-10T06:01:06.722 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T06:01:06.723 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102
2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying :
python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:01:06.724 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 
2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:01:06.725 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:01:06.729 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T06:01:06.731 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T06:01:06.733 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T06:01:06.736 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T06:01:06.737 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T06:01:06.739 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T06:01:06.740 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T06:01:06.742 
INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T06:01:06.746 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T06:01:06.746 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T06:01:06.749 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T06:01:06.754 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T06:01:06.756 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T06:01:06.761 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T06:01:06.761 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.761 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T06:01:06.766 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T06:01:06.766 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: 
ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.804 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:01:06.805 
INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:01:06.805 
INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:01:06.805 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:01:06.806 
INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:01:06.806 
INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T06:01:06.806 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T06:01:06.857 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T06:01:06.858 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 
2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T06:01:06.858 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T06:01:06.859 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T06:01:06.859 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T06:01:06.860 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T06:01:06.860 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T06:01:06.861 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:01:06.861 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: 
Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:01:06.862 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T06:01:06.881 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 
19/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T06:01:06.882 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T06:01:06.883 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T06:01:06.884 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 
84/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T06:01:06.885 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T06:01:06.885 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.941 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: 
ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:01:06.942 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64 
2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:01:06.943 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 
2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:01:06.944 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:01:06.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64 
2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T06:01:06.946 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T06:01:06.947 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.947 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:01:06.947 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout:Removed: 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.963 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: 
ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.964 
INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-idna-2.10-7.el9.1.noarch 
2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T06:01:06.964 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: 
python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora-5.0.0-2.el9.noarch 
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:06.965 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:01:07.014 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 775 k
2026-03-10T06:01:07.015 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:01:07.016 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:01:07.016 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:01:07.018 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:01:07.018 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:01:07.033 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        : 1/1
2026-03-10T06:01:07.033 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.136 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.144 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 775 k
2026-03-10T06:01:07.145 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:01:07.146 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:01:07.147 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:01:07.148 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:01:07.148 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:01:07.165 INFO:teuthology.orchestra.run.vm08.stdout:  Preparing        : 1/1
2026-03-10T06:01:07.166 INFO:teuthology.orchestra.run.vm08.stdout:  Erasing          : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.167 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:Remove 1 Package
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 775 k
2026-03-10T06:01:07.168 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:01:07.170 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:01:07.170 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:01:07.171 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:07.172 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:07.188 INFO:teuthology.orchestra.run.vm06.stdout:  Preparing        : 1/1
2026-03-10T06:01:07.188 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.274 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.296 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:07.317 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:07.345 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:07.360 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:01:07.360 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:07.364 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:07.364 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:07.364 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:07.511 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:01:07.511 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:07.514 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:07.515 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:07.515 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:07.527 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T06:01:07.528 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:07.530 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:07.531 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:07.531 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:07.535 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr
2026-03-10T06:01:07.535 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:07.539 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:07.539 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:07.539 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:07.693 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr
2026-03-10T06:01:07.693 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:07.697 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:07.697 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:07.697 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:07.703 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr
2026-03-10T06:01:07.703 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:07.706 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:07.707 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:07.707 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:07.709 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:01:07.709 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:07.712 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:07.713 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:07.713 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:07.866 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:01:07.866 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:07.869 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:07.870 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:07.870 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:07.877 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T06:01:07.877 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:07.880 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:07.881 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:07.881 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:07.886 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:01:07.886 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:07.889 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:07.890 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:07.890 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:08.046 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:01:08.046 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:08.049 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:08.050 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:08.050 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:08.055 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T06:01:08.055 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:08.059 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:08.059 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:08.059 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:08.071 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:01:08.072 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:08.075 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:08.076 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:08.076 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:08.232 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:01:08.232 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:08.236 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:08.236 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:08.236 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:08.243 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:01:08.243 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:08.246 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:08.247 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:08.247 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:08.255 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-rook
2026-03-10T06:01:08.255 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:08.258 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:08.259 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:08.259 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:08.405 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:01:08.405 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:08.408 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:08.409 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:08.409 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.6 M
2026-03-10T06:01:08.431 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:01:08.432 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T06:01:08.432 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:08.433 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:01:08.433 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:01:08.436 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:08.436 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:08.436 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:08.442 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:01:08.443 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:01:08.467 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        : 1/1
2026-03-10T06:01:08.481 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.551 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.591 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Remove 1 Package
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 3.6 M
2026-03-10T06:01:08.604 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:01:08.606 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:01:08.606 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:01:08.619 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:01:08.619 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:01:08.633 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.6 M
2026-03-10T06:01:08.634 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:01:08.636 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:01:08.636 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:01:08.646 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:01:08.646 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:01:08.647 INFO:teuthology.orchestra.run.vm06.stdout:  Preparing        : 1/1
2026-03-10T06:01:08.662 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.672 INFO:teuthology.orchestra.run.vm08.stdout:  Preparing        : 1/1
2026-03-10T06:01:08.687 INFO:teuthology.orchestra.run.vm08.stdout:  Erasing          : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.728 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.760 INFO:teuthology.orchestra.run.vm08.stdout:  Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.774 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.774 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:08.775 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:01:08.775 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:08.775 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:08.775 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:08.785 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-volume
2026-03-10T06:01:08.785 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:08.789 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:08.789 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:08.790 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout:  Verifying        : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:08.802 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:08.974 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-volume
2026-03-10T06:01:08.974 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:08.977 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:08.978 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:08.978 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:08.979 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 610 k
2026-03-10T06:01:08.980 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:01:08.981 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-volume
2026-03-10T06:01:08.981 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:08.982 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:01:08.982 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:01:08.985 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:08.985 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:08.985 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:08.993 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:01:08.993 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:01:09.019 INFO:teuthology.orchestra.run.vm04.stdout:  Preparing        : 1/1
2026-03-10T06:01:09.022 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.035 INFO:teuthology.orchestra.run.vm04.stdout:  Erasing          : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.104 INFO:teuthology.orchestra.run.vm04.stdout:  Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.104 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.155 INFO:teuthology.orchestra.run.vm04.stdout:  Verifying        : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.156 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:09.167 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repo Size
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Remove 2 Packages
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 610 k
2026-03-10T06:01:09.168 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:01:09.170 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:01:09.170 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.179 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages
2026-03-10T06:01:09.180 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.180 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 610 k
2026-03-10T06:01:09.180 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:01:09.181 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:01:09.181 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:01:09.181 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:01:09.182 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:01:09.192 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:01:09.192 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:01:09.206 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T06:01:09.208 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.216 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:01:09.219 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.221 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.232 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.288 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.288 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.295 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.295 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.338 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.342 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:01:09.357 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:Remove 3 Packages
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.7 M
2026-03-10T06:01:09.358 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:01:09.360 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:01:09.360 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:01:09.376 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:01:09.376 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:01:09.407 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-10T06:01:09.410 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.412 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.412 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.471 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.471 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.471 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.510 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:09.540 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repo Size
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Remove 3 Packages
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 3.7 M
2026-03-10T06:01:09.541 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:01:09.543 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:01:09.543 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:01:09.543 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Remove 3 Packages
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.544 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.7 M
2026-03-10T06:01:09.545 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:01:09.546 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:01:09.546 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:01:09.560 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:01:09.560 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:01:09.563 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:01:09.563 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:01:09.590 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T06:01:09.592 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.594 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:01:09.594 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.594 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.596 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.597 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.597 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.658 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.659 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.659 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.660 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.660 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T06:01:09.660 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T06:01:09.687 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: libcephfs-devel
2026-03-10T06:01:09.687 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:09.690 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:09.691 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:09.691 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:09.698 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:09.867 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: libcephfs-devel
2026-03-10T06:01:09.867 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:09.867 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Remove 20 Packages
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 79 M
2026-03-10T06:01:09.869 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-10T06:01:09.870 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: libcephfs-devel
2026-03-10T06:01:09.870 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:09.871 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:09.871 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:09.871 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:09.873 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-10T06:01:09.873 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-10T06:01:09.873 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:09.874 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:09.874 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:09.895 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-10T06:01:09.895 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-10T06:01:09.936 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-10T06:01:09.939 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:01:09.941 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:01:09.944 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:01:09.944 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:09.958 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:09.960 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:01:09.962 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:01:09.963 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:09.964 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:01:09.967 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:01:09.967 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:09.981 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:09.981 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:09.981 INFO:teuthology.orchestra.run.vm04.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:01:09.981 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:09.995 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:09.997 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.001 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:01:10.005 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:01:10.008 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:01:10.010 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:01:10.012 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:01:10.014 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:01:10.016 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:01:10.029 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.050 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:10.051 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:10.051 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:01:10.051 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:10.051 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T06:01:10.051 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Remove 20 Packages
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 79 M
2026-03-10T06:01:10.052 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:01:10.056 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T06:01:10.057 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:Remove 20 Packages
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 79 M
2026-03-10T06:01:10.058 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T06:01:10.062 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T06:01:10.062 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T06:01:10.081 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:01:10.082 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:01:10.085 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T06:01:10.085 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T06:01:10.094 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:01:10.095 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:01:10.125 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:01:10.127 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:01:10.128 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T06:01:10.129 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:01:10.131 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T06:01:10.132 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:01:10.132 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:10.133 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T06:01:10.137 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T06:01:10.137 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:10.141 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:01:10.141 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:10.141 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-10T06:01:10.141 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:01:10.141 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T06:01:10.142 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:10.147 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:10.149 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:01:10.150 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T06:01:10.152 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:01:10.152 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T06:01:10.153 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:10.154 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T06:01:10.155 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:01:10.156 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:10.157 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:01:10.158 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.158 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T06:01:10.161 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T06:01:10.161 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.170 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.170 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:10.170 INFO:teuthology.orchestra.run.vm08.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:01:10.170 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:10.176 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.176 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:10.176 INFO:teuthology.orchestra.run.vm06.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T06:01:10.176 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:10.183 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:10.185 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.188 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:01:10.190 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T06:01:10.192 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:01:10.192 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.194 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:01:10.195 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T06:01:10.197 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:01:10.199 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:01:10.199 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T06:01:10.201 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:01:10.202 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T06:01:10.203 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:01:10.205 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T06:01:10.207 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T06:01:10.209 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T06:01:10.211 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T06:01:10.217 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.226 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.283 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.283 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T06:01:10.284 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T06:01:10.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.328 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T06:01:10.329 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:10.331 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:01:10.332 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:10.354 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: librbd1
2026-03-10T06:01:10.354 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:10.356 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:10.357 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:10.357 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:10.532 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: librbd1
2026-03-10T06:01:10.532 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:10.535 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:10.535 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:10.535 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:10.547 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: librbd1
2026-03-10T06:01:10.547 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:10.549 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:10.549 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rados
2026-03-10T06:01:10.549 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:10.549 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:10.549 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:10.551 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:10.552 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:10.552 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:10.731 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rgw
2026-03-10T06:01:10.731 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:10.733 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:10.734 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:10.734 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:10.737 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rados
2026-03-10T06:01:10.737 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:10.740 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:10.740 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rados
2026-03-10T06:01:10.740 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:10.741 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:10.741 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:10.743 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:10.743 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:10.743 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:10.897 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-cephfs
2026-03-10T06:01:10.897 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:10.900 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:10.900 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:10.900 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:10.912 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rgw
2026-03-10T06:01:10.912 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:10.913 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rgw
2026-03-10T06:01:10.913 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:10.914 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:10.915 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:10.915 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:10.915 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:10.916 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:10.916 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.065 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rbd
2026-03-10T06:01:11.065 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:11.068 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:11.068 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:11.068 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:11.083 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-cephfs
2026-03-10T06:01:11.083 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:11.085 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:11.086 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-cephfs
2026-03-10T06:01:11.086 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:11.086 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:11.086 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:11.088 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:11.089 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:11.089 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.233 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-fuse
2026-03-10T06:01:11.233 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:11.235 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:11.236 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:11.236 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:11.256 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rbd
2026-03-10T06:01:11.256 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:11.258 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rbd
2026-03-10T06:01:11.258 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:11.258 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:11.259 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:11.259 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:11.260 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:11.261 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:11.261 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.402 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-mirror
2026-03-10T06:01:11.402 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:11.405 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:11.405 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:11.405 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:11.421 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-fuse
2026-03-10T06:01:11.421 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:11.423 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:11.424 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:11.424 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:11.427 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-fuse
2026-03-10T06:01:11.427 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:11.429 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:11.430 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:11.430 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.566 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-nbd
2026-03-10T06:01:11.566 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-10T06:01:11.568 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-10T06:01:11.569 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-10T06:01:11.569 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-10T06:01:11.584 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-mirror
2026-03-10T06:01:11.584 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:11.586 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:11.587 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:11.587 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:11.590 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-mirror
2026-03-10T06:01:11.591 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:11.592 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-10T06:01:11.593 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:11.593 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:11.593 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.717 INFO:teuthology.orchestra.run.vm04.stdout:56 files removed
2026-03-10T06:01:11.739 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:11.752 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-nbd
2026-03-10T06:01:11.752 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T06:01:11.754 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T06:01:11.755 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T06:01:11.755 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T06:01:11.757 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-nbd
2026-03-10T06:01:11.757 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:01:11.759 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:01:11.760 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:01:11.760 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:01:11.763 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean expire-cache
2026-03-10T06:01:11.776 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all
2026-03-10T06:01:11.781 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all
2026-03-10T06:01:11.897 INFO:teuthology.orchestra.run.vm06.stdout:56 files removed
2026-03-10T06:01:11.911 INFO:teuthology.orchestra.run.vm08.stdout:56 files removed
2026-03-10T06:01:11.916 INFO:teuthology.orchestra.run.vm04.stdout:Cache was expired
2026-03-10T06:01:11.916 INFO:teuthology.orchestra.run.vm04.stdout:0 files removed
2026-03-10T06:01:11.919 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:11.935 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:11.936 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:11.942 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean expire-cache
2026-03-10T06:01:11.957 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean expire-cache
2026-03-10T06:01:12.088 INFO:teuthology.orchestra.run.vm06.stdout:Cache was expired
2026-03-10T06:01:12.088 INFO:teuthology.orchestra.run.vm06.stdout:0 files removed
2026-03-10T06:01:12.103 INFO:teuthology.orchestra.run.vm08.stdout:Cache was expired
2026-03-10T06:01:12.104 INFO:teuthology.orchestra.run.vm08.stdout:0 files removed
2026-03-10T06:01:12.106 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:12.121 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:12.121 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local
2026-03-10T06:01:12.121 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm06.local
2026-03-10T06:01:12.121 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local
2026-03-10T06:01:12.121 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:12.121 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:12.121 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T06:01:12.145 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:01:12.147 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:01:12.155 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:01:12.211 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:12.212 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:12.223 DEBUG:teuthology.parallel:result is None
2026-03-10T06:01:12.223 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T06:01:12.226 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T06:01:12.226 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:01:12.267 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:01:12.269 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:01:12.279 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:01:12.282 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:01:12.285 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:^+ vps-fra8.orleans.ddnss.de 2 6 377 20 -899us[ -949us] +/- 17ms
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:^+ 139-144-71-56.ip.linodeu> 2 6 377 19 +3629us[+3579us] +/- 30ms
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:^* static.236.223.13.49.cli> 3 6 377 17 -984us[-1034us] +/- 13ms
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm04.stdout:^+ bond1-1201.fsn-lf-s02.pr> 2 6 377 19 -1188us[-1237us] +/- 20ms
2026-03-10T06:01:12.477 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm08.stdout:^+ 139-144-71-56.ip.linodeu> 2 6 377 21 +3406us[+3420us] +/- 30ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm08.stdout:^* static.236.223.13.49.cli> 3 6 377 14 -1120us[-1118us] +/- 13ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm08.stdout:^+ bond1-1201.fsn-lf-s02.pr> 2 6 377 20 -1282us[-1267us] +/- 20ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm08.stdout:^+ vps-fra8.orleans.ddnss.de 2 6 377 19 +732us[ +746us] +/- 16ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:===============================================================================
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:^+ 139-144-71-56.ip.linodeu> 2 6 377 20 +3483us[+3483us] +/- 30ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:^* static.236.223.13.49.cli> 3 6 377 21 -1011us[ -627us] +/- 13ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:^+ bond1-1201.fsn-lf-s02.pr> 2 6 377 20 -1196us[-1196us] +/- 20ms
2026-03-10T06:01:12.478 INFO:teuthology.orchestra.run.vm06.stdout:^+ vps-fra8.orleans.ddnss.de 2 6 377 20 -875us[ -875us] +/- 17ms
2026-03-10T06:01:12.479 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T06:01:12.481 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T06:01:12.481 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T06:01:12.483 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T06:01:12.485 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T06:01:12.487 INFO:teuthology.task.internal:Duration was 813.902697 seconds
2026-03-10T06:01:12.487 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T06:01:12.489 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T06:01:12.489 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:01:12.522 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:01:12.523 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:01:12.558 INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:01:12.564 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:01:12.565 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:01:12.914 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T06:01:12.914 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-10T06:01:12.914 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:01:12.938 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-10T06:01:12.938 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:01:12.977 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local
2026-03-10T06:01:12.977 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:01:13.002 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T06:01:13.002 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.003 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.019 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.431 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T06:01:13.431 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:01:13.433 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:01:13.434 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:01:13.457 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:01:13.457 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:01:13.458 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:01:13.458 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.458 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:01:13.459 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:01:13.460 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:01:13.460 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:01:13.460 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.461 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:01:13.461 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:01:13.462 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:01:13.462 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:01:13.462 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:01:13.463 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:01:13.571 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:01:13.582 INFO:teuthology.orchestra.run.vm04.stderr: 98.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:01:13.602 INFO:teuthology.orchestra.run.vm06.stderr: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:01:13.604 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T06:01:13.607 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T06:01:13.607 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:01:13.652 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:01:13.675 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:01:13.703 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T06:01:13.706 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.707 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.718 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.733 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-10T06:01:13.744 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-10T06:01:13.772 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-10T06:01:13.785 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.803 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:01:13.803 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.821 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:01:13.821 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:01:13.839 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:01:13.839 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T06:01:13.842 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T06:01:13.842 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm04
2026-03-10T06:01:13.842 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:01:13.873 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm06
2026-03-10T06:01:13.873 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:01:13.899 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/920/remote/vm08
2026-03-10T06:01:13.899 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:01:13.927 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T06:01:13.927 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:01:13.929 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:01:13.941 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:01:13.984 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T06:01:13.986 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T06:01:13.987 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T06:01:13.989 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T06:01:13.989 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:01:13.991 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:01:13.996 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:01:14.008 INFO:teuthology.orchestra.run.vm04.stdout: 8532144 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:01 /home/ubuntu/cephtest
2026-03-10T06:01:14.010 INFO:teuthology.orchestra.run.vm06.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:01 /home/ubuntu/cephtest
2026-03-10T06:01:14.039 INFO:teuthology.orchestra.run.vm08.stdout: 8532146 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 06:01 /home/ubuntu/cephtest
2026-03-10T06:01:14.040 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T06:01:14.046 INFO:teuthology.run:Summary data: description: orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} duration: 813.9026966094971 flavor: default owner: kyr success: true
2026-03-10T06:01:14.046 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T06:01:14.067 INFO:teuthology.run:pass