I forgot the additional logs; supervdsm.log and vdsm.log excerpts follow at the end of this mail.

Please guys, any help... (insert scream here).

On 03/02/2020 01:20, Christian Reiss wrote:
Hey folks,

Oh Jesus. 3-way HCI, and Gluster itself reports no issues whatsoever:

[root@node01:/var/log/glusterfs] # gluster vol info  ssd_storage

Volume Name: ssd_storage
Type: Replicate
Volume ID: d84ec99a-5db9-49c6-aab4-c7481a1dc57b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick2: node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick3: node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable


[root@node01:/var/log/glusterfs] # gluster vol status  ssd_storage
Status of volume: ssd_storage
Gluster process                                                    TCP Port  RDMA Port  Online  Pid
----------------------------------------------------------------------------------------------------
Brick node01.company.com:/gluster_bricks/ssd_storage/ssd_storage   49152     0          Y       63488
Brick node02.company.com:/gluster_bricks/ssd_storage/ssd_storage   49152     0          Y       18860
Brick node03.company.com:/gluster_bricks/ssd_storage/ssd_storage   49152     0          Y       15262
Self-heal Daemon on localhost                                      N/A       N/A        Y       63511
Self-heal Daemon on node03.dc-dus.dalason.net                      N/A       N/A        Y       15285
Self-heal Daemon on 10.100.200.12                                  N/A       N/A        Y       18883

Task Status of Volume ssd_storage
------------------------------------------------------------------------------
There are no active volume tasks



[root@node01:/var/log/glusterfs] # gluster vol heal ssd_storage info
Brick node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0

Brick node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0

Brick node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0
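
Heal info is clean, but to rule out split-brain explicitly we still want to run the standard check (command only; we have not captured its output yet):

[root@node01:/var/log/glusterfs] # gluster vol heal ssd_storage info split-brain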



And everything is mounted where it's supposed to be. But no VMs start; they all fail with an I/O error. I checked the md5 of a Gluster-hosted file (a CentOS ISO) against a local copy, and it matches. One VM managed to start at one point, but failed on subsequent starts. The data/disks themselves seem okay.
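
For reference, the integrity check was roughly this (ISO paths illustrative, not the exact filenames):

[root@node01 ~] # md5sum "/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/<iso-on-the-volume>"
[root@node01 ~] # md5sum /root/<local-copy-of-the-iso>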

The mount log /var/log/glusterfs/rhev-data-center-mnt-glusterSD-node01.company.com:_ssd__storage.log-20200202 has entries like:


[2020-02-01 23:15:15.449902] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1: remote operation failed. Path: /.shard/86da0289-f74f-4200-9284-678e7bd76195.1405 (00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-02-01 23:15:15.484363] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1: remote operation failed. Path: /.shard/86da0289-f74f-4200-9284-678e7bd76195.1400 (00000000-0000-0000-0000-000000000000) [Permission denied]
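
If the client translators are numbered from zero, 0-ssd_storage-client-1 should be node02's brick. Next we want to inspect those shards directly on that brick and compare their ownership against storage.owner-uid/gid 36, roughly (untested; brick path from vol info, shard name from the log above):

[root@node02 ~] # stat /gluster_bricks/ssd_storage/ssd_storage/.shard/86da0289-f74f-4200-9284-678e7bd76195.1405
[root@node02 ~] # getfattr -d -m . -e hex /gluster_bricks/ssd_storage/ssd_storage/.shard/86da0289-f74f-4200-9284-678e7bd76195.1405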


Before this happened, we had put one host into maintenance mode; it all started during the migration.

Any help? We're sweating blood here.




--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss

MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,411::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,411::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,492::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,508::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,587::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/3::DEBUG::2020-02-03 01:21:31,587::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,394::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,394::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,476::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,478::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,560::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:32,560::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,631::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,631::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,710::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,722::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,801::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:32,801::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:32,944::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:32,944::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:33,023::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:33,025::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:33,105::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/6::DEBUG::2020-02-03 01:21:33,105::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,179::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,179::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,259::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,261::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,340::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2020-02-03 01:21:33,341::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:37,885::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call peerStatus with () {}
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:37,885::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:37,967::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:37,969::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster system:: uuid get (cwd None)
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:38,050::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/5::DEBUG::2020-02-03 01:21:38,050::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return peerStatus with [{'status': 'CONNECTED', 'hostname': '10.100.200.11/24', 'uuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'status': 'CONNECTED', 'hostname': 'node03.example.com', 'uuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}, {'status': 'CONNECTED', 'hostname': '10.100.200.12', 'uuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}]
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:38,057::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (None, None) {}
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:38,057::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --xml (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:38,138::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-02-03 01:21:38,139::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return volumeInfo with {'ssd_storage': {'transportType': ['TCP'], 'uuid': 'd84ec99a-5db9-49c6-aab4-c7481a1dc57b', 'disperseCount': '0', 'bricks': ['node01.example.com:/gluster_bricks/ssd_storage/ssd_storage', 'node02.example.com:/gluster_bricks/ssd_storage/ssd_storage', 'node03.example.com:/gluster_bricks/ssd_storage/ssd_storage'], 'volumeName': 'ssd_storage', 'volumeType': 'REPLICATE', 'replicaCount': '3', 'brickCount': '3', 'redundancyCount': '0', 'isArbiter': False, 'distCount': '3', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [{'isArbiter': False, 'name': 'node01.example.com:/gluster_bricks/ssd_storage/ssd_storage', 'hostUuid': 'ada2890c-f8cf-4f9f-b99b-90fe302af2b7'}, {'isArbiter': False, 'name': 'node02.example.com:/gluster_bricks/ssd_storage/ssd_storage', 'hostUuid': 'd686b35b-addb-44bf-bc64-7d763325b90a'}, {'isArbiter': False, 'name': 'node03.example.com:/gluster_bricks/ssd_storage/ssd_storage', 'hostUuid': '19e7f6e0-6be6-4362-908a-c509a9a65463'}], 'options': {'performance.client-io-threads': 'on', 'network.ping-timeout': '30', 'user.cifs': 'off', 'cluster.self-heal-daemon': 'enable', 'performance.strict-o-direct': 'on', 'cluster.eager-lock': 'enable', 'network.remote-dio': 'off', 'features.shard': 'on', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'cluster.choose-local': 'off', 'storage.owner-gid': '36', 'cluster.locking-scheme': 'granular', 'performance.low-prio-threads': '32', 'cluster.shd-wait-qlength': '10000', 'nfs.disable': 'on', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'cluster.data-self-heal-algorithm': 'full', 'client.event-threads': '4', 'transport.address-family': 'inet', 'cluster.granular-entry-heal': 'enable', 'cluster.server-quorum-type': 'server', 'cluster.shd-max-threads': '8', 'server.event-threads': '4', 'performance.read-ahead': 'off'}}}

2020-02-03 01:20:41,566+0100 INFO  (jsonrpc/1) [api.host] START getAllVmStats() from=::1,58970 (api:48)
2020-02-03 01:20:41,567+0100 INFO  (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,58970 (api:54)
2020-02-03 01:20:41,567+0100 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2020-02-03 01:20:42,773+0100 INFO  (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'9e6c3132-32f0-11ea-86bc-002590b8ddd6', options=None) from=::ffff:10.100.200.10,49816, task_id=af9089be-6126-4fe5-a4cd-560b557ba22b (api:48)
2020-02-03 01:20:42,778+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 22L}} from=::ffff:10.100.200.10,49816, task_id=af9089be-6126-4fe5-a4cd-560b557ba22b (api:54)
2020-02-03 01:20:42,778+0100 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:312)
2020-02-03 01:20:42,786+0100 INFO  (jsonrpc/2) [vdsm.api] START getStoragePoolInfo(spUUID=u'9e6c3132-32f0-11ea-86bc-002590b8ddd6', options=None) from=::ffff:10.100.200.10,49936, task_id=12f61eef-5d6b-4ac2-b598-4fb70b5f1497 (api:48)
2020-02-03 01:20:42,791+0100 INFO  (jsonrpc/2) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 22L, 'domains': u'fec2eb5e-21b5-496b-9ea5-f718b2cb5556:Active', 'master_uuid': u'fec2eb5e-21b5-496b-9ea5-f718b2cb5556', 'version': '5', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 105}, 'dominfo': {u'fec2eb5e-21b5-496b-9ea5-f718b2cb5556': {'status': u'Active', 'diskfree': '7307209584640', 'isoprefix': '', 'alerts': [], 'disktotal': '9554169954304', 'version': 5}}} from=::ffff:10.100.200.10,49936, task_id=12f61eef-5d6b-4ac2-b598-4fb70b5f1497 (api:54)
2020-02-03 01:20:42,791+0100 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 seconds (__init__:312)
2020-02-03 01:20:42,806+0100 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in 0.17 seconds (__init__:312)
2020-02-03 01:20:42,932+0100 INFO  (jsonrpc/5) [api.virt] START create(vmParams={u'xml': u'<?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"; xmlns:ovirt-vm="http://ovirt.org/vm/1.0";><name>docker01.company.com</name><uuid>e39fd732-29a9-45aa-98a5-dbd9a8b9b724</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">33554432</maxMemory><vcpu current="8">16</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">oVirt</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">e39fd732-29a9-45aa-98a5-dbd9a8b9b724</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"></timer><timer name="pit" tickpolicy="delay"></timer><timer name="hpet" present="no"></timer></clock><features><acpi></acpi></features><cpu match="exact"><model>EPYC</model><topology cores="1" threads="1" sockets="16"></topology><numa><cell id="0" cpus="0,1,2,3,4,5,6,7" memory="16777216"></cell></numa></cpu><cputune></cputune><devices><input type="tablet" bus="usb"></input><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"></target><source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.ovirt-guest-agent.0"></source></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"></target><source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.org.qemu.guest_agent.0"></source></channel><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"></channel><channel name="inputs" mode="secure"></channel><channel name="cursor" mode="secure"></channel><channel name="playback" mode="secure"></channel><channel name="record" mode="secure"></channel><channel name="display" mode="secure"></channel><channel name="smartcard" mode="secure"></channel><channel name="usbredir" mode="secure"></channel><listen type="network" network="vdsm-ovirtmgmt"></listen></graphics><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-4cabd1d6-0058-4376-8277-e7c2de1c0391"></alias></rng><memballoon model="virtio"><stats period="5"></stats><alias name="ua-4dd2d46d-6522-46e9-b82a-6d57808a74ca"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"></address></memballoon><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"></driver><alias name="ua-4f06c2bb-8a59-4eb2-b045-87f47ca5d729"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"></address></controller><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"></model><alias name="ua-5f8e5f24-0ccc-4172-a38f-a413881108cf"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"></address></video><controller type="ide" index="0"><address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"></address></controller><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"></listen></graphics><controller type="usb" model="piix3-uhci" index="0"><address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"></address></controller><watchdog model="i6300esb" action="reset"><alias 
name="ua-c0e56946-a8c1-4514-b4a0-48eca24028c4"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x08" type="pci"></address></watchdog><controller type="virtio-serial" index="0" ports="16"><alias name="ua-ecaf6c1f-822a-4d9a-ae16-c396cb8d8c4f"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"></address></controller><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"></target></channel><interface type="bridge"><model type="virtio"></model><link state="up"></link><source bridge="dc-dus-public"></source><driver queues="4" name="vhost"></driver><alias name="ua-ad0da345-548a-427d-b5bc-3405854199dc"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"></address><mac address="56:6f:e1:75:00:13"></mac><mtu size="1500"></mtu><bandwidth></bandwidth></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"></driver><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"></seclabel></source><target dev="hdc" bus="ide"></target><readonly></readonly><alias name="ua-b51dc6a4-0c23-43b1-9933-d2191d205f97"></alias><address bus="1" controller="0" unit="0" type="drive" target="0"></address></disk><disk snapshot="no" type="file" device="disk"><target dev="sda" bus="scsi"></target><source file="/rhev/data-center/9e6c3132-32f0-11ea-86bc-002590b8ddd6/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2"><seclabel model="dac" type="none" relabel="no"></seclabel></source><driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"></driver><alias name="ua-8d6e1948-5fbb-48a0-9e73-d26db5031351"></alias><address bus="0" controller="0" unit="0" type="drive" target="0"></address><boot order="1"></boot><serial>8d6e1948-5fbb-48a0-9e73-d26db5031351</serial></disk></devices><pm><suspend-to-disk enabled="no"></suspend-to-disk><suspend-to-mem enabled="no"></suspend-to-mem></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"></smbios></os><metadata><ovirt-tune:qos></ovirt-tune:qos><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom></ovirt-vm:custom><ovirt-vm:device mac_address="56:6f:e1:75:00:13"><ovirt-vm:custom></ovirt-vm:custom></ovirt-vm:device><ovirt-vm:device devtype="disk" name="sda"><ovirt-vm:poolID>9e6c3132-32f0-11ea-86bc-002590b8ddd6</ovirt-vm:poolID><ovirt-vm:volumeID>886c30b7-1a91-449d-a232-cc68eb6b67f2</ovirt-vm:volumeID><ovirt-vm:imageID>8d6e1948-5fbb-48a0-9e73-d26db5031351</ovirt-vm:imageID><ovirt-vm:domainID>fec2eb5e-21b5-496b-9ea5-f718b2cb5556</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>'}) from=::ffff:10.100.200.10,49816, flow_id=024bed2e-1980-449d-9305-c10032dd5f79, vmId= (api:48)
2020-02-03 01:20:42,973+0100 INFO  (jsonrpc/5) [api.virt] FINISH create return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'maxMemSize': 32768, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'tabletEnable': 'true', 'vmId': 'e39fd732-29a9-45aa-98a5-dbd9a8b9b724', 'memGuaranteedSize': 8192, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'EPYC', 'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '8', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3,4,5,6,7', 'memory': '16384'}], u'xml': u'<?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"; xmlns:ovirt-vm="http://ovirt.org/vm/1.0";><name>docker01.company.com</name><uuid>e39fd732-29a9-45aa-98a5-dbd9a8b9b724</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">33554432</maxMemory><vcpu current="8">16</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">oVirt</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">e39fd732-29a9-45aa-98a5-dbd9a8b9b724</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"></timer><timer name="pit" tickpolicy="delay"></timer><timer name="hpet" present="no"></timer></clock><features><acpi></acpi></features><cpu match="exact"><model>EPYC</model><topology cores="1" threads="1" sockets="16"></topology><numa><cell id="0" cpus="0,1,2,3,4,5,6,7" memory="16777216"></cell></numa></cpu><cputune></cputune><devices><input type="tablet" bus="usb"></input><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"></target><source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.ovirt-guest-agent.0"></source></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"></target><source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.org.qemu.guest_agent.0"></source></channel><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"></channel><channel name="inputs" mode="secure"></channel><channel name="cursor" mode="secure"></channel><channel name="playback" mode="secure"></channel><channel name="record" mode="secure"></channel><channel name="display" mode="secure"></channel><channel name="smartcard" mode="secure"></channel><channel name="usbredir" mode="secure"></channel><listen type="network" network="vdsm-ovirtmgmt"></listen></graphics><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-4cabd1d6-0058-4376-8277-e7c2de1c0391"></alias></rng><memballoon model="virtio"><stats period="5"></stats><alias name="ua-4dd2d46d-6522-46e9-b82a-6d57808a74ca"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"></address></memballoon><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"></driver><alias name="ua-4f06c2bb-8a59-4eb2-b045-87f47ca5d729"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"></address></controller><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"></model><alias name="ua-5f8e5f24-0ccc-4172-a38f-a413881108cf"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"></address></video><controller type="ide" index="0"><address bus="0x00" 
domain="0x0000" function="0x1" slot="0x01" type="pci"></address></controller><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"></listen></graphics><controller type="usb" model="piix3-uhci" index="0"><address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"></address></controller><watchdog model="i6300esb" action="reset"><alias name="ua-c0e56946-a8c1-4514-b4a0-48eca24028c4"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x08" type="pci"></address></watchdog><controller type="virtio-serial" index="0" ports="16"><alias name="ua-ecaf6c1f-822a-4d9a-ae16-c396cb8d8c4f"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"></address></controller><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"></target></channel><interface type="bridge"><model type="virtio"></model><link state="up"></link><source bridge="dc-dus-public"></source><driver queues="4" name="vhost"></driver><alias name="ua-ad0da345-548a-427d-b5bc-3405854199dc"></alias><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"></address><mac address="56:6f:e1:75:00:13"></mac><mtu size="1500"></mtu><bandwidth></bandwidth></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"></driver><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"></seclabel></source><target dev="hdc" bus="ide"></target><readonly></readonly><alias name="ua-b51dc6a4-0c23-43b1-9933-d2191d205f97"></alias><address bus="1" controller="0" unit="0" type="drive" target="0"></address></disk><disk snapshot="no" type="file" device="disk"><target dev="sda" bus="scsi"></target><source file="/rhev/data-center/9e6c3132-32f0-11ea-86bc-002590b8ddd6/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2"><seclabel model="dac" type="none" relabel="no"></seclabel></source><driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"></driver><alias name="ua-8d6e1948-5fbb-48a0-9e73-d26db5031351"></alias><address bus="0" controller="0" unit="0" type="drive" target="0"></address><boot order="1"></boot><serial>8d6e1948-5fbb-48a0-9e73-d26db5031351</serial></disk></devices><pm><suspend-to-disk enabled="no"></suspend-to-disk><suspend-to-mem enabled="no"></suspend-to-mem></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"></smbios></os><metadata><ovirt-tune:qos></ovirt-tune:qos><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom></ovirt-vm:custom><ovirt-vm:device mac_address="56:6f:e1:75:00:13"><ovirt-vm:custom></ovirt-vm:custom></ovirt-vm:device><ovirt-vm:device devtype="disk" name="sda"><ovirt-vm:poolID>9e6c3132-32f0-11ea-86bc-002590b8ddd6</ovirt-vm:poolID><ovirt-vm:volumeID>886c30b7-1a91-449d-a232-cc68eb6b67f2</ovirt-vm:volumeID><ovirt-vm:imageID>8d6e1948-5fbb-48a0-9e73-d26db5031351</ovirt-vm:imageID><ovirt-vm:domainID>fec2eb5e-21b5-496b-9ea5-f718b2cb5556</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>', 'smpCoresPerSocket': '1', 'kvmEnable': 'true', 'bootMenuEnable': 'false', 'devices': [], 'custom': {}, 'maxVCpus': '16', 'numOfIoThreads': 
'1', 'statusTime': '4299874250', 'vmName': 'docker01.company.com', 'maxMemSlots': 16}} from=::ffff:10.100.200.10,49816, flow_id=024bed2e-1980-449d-9305-c10032dd5f79, vmId= (api:54)
2020-02-03 01:20:42,973+0100 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.04 seconds (__init__:312)
2020-02-03 01:20:42,974+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') VM wrapper has started (vm:2782)
2020-02-03 01:20:42,995+0100 INFO  (vm/e39fd732) [vdsm.api] START getVolumeSize(sdUUID='fec2eb5e-21b5-496b-9ea5-f718b2cb5556', spUUID='9e6c3132-32f0-11ea-86bc-002590b8ddd6', imgUUID='8d6e1948-5fbb-48a0-9e73-d26db5031351', volUUID='886c30b7-1a91-449d-a232-cc68eb6b67f2', options=None) from=internal, task_id=205fd0b1-9c0c-4bf9-ac76-fe40c8ca1cee (api:48)
2020-02-03 01:20:42,998+0100 INFO  (vm/e39fd732) [vdsm.api] FINISH getVolumeSize return={'truesize': '18552459264', 'apparentsize': '214748364800'} from=internal, task_id=205fd0b1-9c0c-4bf9-ac76-fe40c8ca1cee (api:54)
2020-02-03 01:20:42,998+0100 INFO  (vm/e39fd732) [vds] prepared volume path:  (clientIF:510)
2020-02-03 01:20:42,998+0100 INFO  (vm/e39fd732) [vdsm.api] START prepareImage(sdUUID='fec2eb5e-21b5-496b-9ea5-f718b2cb5556', spUUID='9e6c3132-32f0-11ea-86bc-002590b8ddd6', imgUUID='8d6e1948-5fbb-48a0-9e73-d26db5031351', leafUUID='886c30b7-1a91-449d-a232-cc68eb6b67f2', allowIllegal=False) from=internal, task_id=f09acd65-c904-44da-8133-485c69d65a68 (api:48)
2020-02-03 01:20:43,035+0100 INFO  (vm/e39fd732) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2 (fileSD:615)
2020-02-03 01:20:43,036+0100 INFO  (vm/e39fd732) [storage.StorageDomain] Creating domain run directory u'/var/run/vdsm/storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556' (fileSD:569)
2020-02-03 01:20:43,036+0100 INFO  (vm/e39fd732) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556 mode: None (fileUtils:199)
2020-02-03 01:20:43,036+0100 INFO  (vm/e39fd732) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351 to /var/run/vdsm/storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/8d6e1948-5fbb-48a0-9e73-d26db5031351 (fileSD:572)
2020-02-03 01:20:43,192+0100 INFO  (vm/e39fd732) [vdsm.api] FINISH prepareImage return={'info': {'path': u'ssd_storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': 'node01.company.com'}, {'port': '0', 'transport': 'tcp', 'name': 'node02.company.com'}, {'port': '0', 'transport': 'tcp', 'name': 'node03.company.com'}], 'protocol': 'gluster'}, 'path': u'/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2', 'imgVolumesInfo': [{'domainID': 'fec2eb5e-21b5-496b-9ea5-f718b2cb5556', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2', 'volumeID': u'886c30b7-1a91-449d-a232-cc68eb6b67f2', 'leasePath': u'/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2.lease', 'imageID': '8d6e1948-5fbb-48a0-9e73-d26db5031351'}]} from=internal, task_id=f09acd65-c904-44da-8133-485c69d65a68 (api:54)
2020-02-03 01:20:43,193+0100 INFO  (vm/e39fd732) [vds] prepared volume path: /rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2 (clientIF:510)
2020-02-03 01:20:43,193+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') Enabling drive monitoring (drivemonitor:56)
2020-02-03 01:20:43,201+0100 WARN  (vm/e39fd732) [root] Attempting to add an existing net user: ovirtmgmt/e39fd732-29a9-45aa-98a5-dbd9a8b9b724 (libvirtnetwork:190)
2020-02-03 01:20:43,231+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') drive 'hdc' path: 'file=' -> '*file=' (storagexml:337)
2020-02-03 01:20:43,232+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') drive 'sda' path: 'file=/rhev/data-center/9e6c3132-32f0-11ea-86bc-002590b8ddd6/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2' -> u'*file=/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2' (storagexml:337)
2020-02-03 01:20:43,332+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_device_create/10_ovirt_provider_ovn_hook: rc=0 err= (hooks:114)
2020-02-03 01:20:43,424+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_device_create/20_ovirt_provider_ovn_vhostuser_hook: rc=0 err= (hooks:114)
2020-02-03 01:20:43,536+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in 0.18 seconds (__init__:312)
2020-02-03 01:20:43,583+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_device_create/50_openstacknet: rc=0 err= (hooks:114)
2020-02-03 01:20:43,680+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_device_create/50_vmfex: rc=0 err= (hooks:114)
2020-02-03 01:20:43,832+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py: rc=0 err= (hooks:114)
2020-02-03 01:20:43,939+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err= (hooks:114)
2020-02-03 01:20:44,030+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:114)
2020-02-03 01:20:44,034+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') <?xml version='1.0' encoding='utf-8'?>
<domain xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm">
    <name>docker01.company.com</name>
    <uuid>e39fd732-29a9-45aa-98a5-dbd9a8b9b724</uuid>
    <memory>16777216</memory>
    <currentMemory>16777216</currentMemory>
    <iothreads>1</iothreads>
    <maxMemory slots="16">33554432</maxMemory>
    <vcpu current="8">16</vcpu>
    <sysinfo type="smbios">
        <system>
            <entry name="manufacturer">oVirt</entry>
            <entry name="product">oVirt Node</entry>
            <entry name="version">7-7.1908.0.el7.centos</entry>
            <entry name="serial">00000000-0000-0000-0000-002590bddabc</entry>
            <entry name="uuid">e39fd732-29a9-45aa-98a5-dbd9a8b9b724</entry>
        </system>
    </sysinfo>
    <clock adjustment="0" offset="variable">
        <timer name="rtc" tickpolicy="catchup" />
        <timer name="pit" tickpolicy="delay" />
        <timer name="hpet" present="no" />
    </clock>
    <features>
        <acpi />
    </features>
    <cpu match="exact">
        <model>EPYC</model>
        <topology cores="1" sockets="16" threads="1" />
        <numa>
            <cell cpus="0,1,2,3,4,5,6,7" id="0" memory="16777216" />
        </numa>
    </cpu>
    <cputune />
    <devices>
        <input bus="usb" type="tablet" />
        <channel type="unix">
            <target name="ovirt-guest-agent.0" type="virtio" />
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.ovirt-guest-agent.0" />
        </channel>
        <channel type="unix">
            <target name="org.qemu.guest_agent.0" type="virtio" />
            <source mode="bind" path="/var/lib/libvirt/qemu/channels/e39fd732-29a9-45aa-98a5-dbd9a8b9b724.org.qemu.guest_agent.0" />
        </channel>
        <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
            <channel mode="secure" name="main" />
            <channel mode="secure" name="inputs" />
            <channel mode="secure" name="cursor" />
            <channel mode="secure" name="playback" />
            <channel mode="secure" name="record" />
            <channel mode="secure" name="display" />
            <channel mode="secure" name="smartcard" />
            <channel mode="secure" name="usbredir" />
            <listen network="vdsm-ovirtmgmt" type="network" />
        </graphics>
        <rng model="virtio">
            <backend model="random">/dev/urandom</backend>
            <alias name="ua-4cabd1d6-0058-4376-8277-e7c2de1c0391" />
        </rng>
        <memballoon model="virtio">
            <stats period="5" />
            <alias name="ua-4dd2d46d-6522-46e9-b82a-6d57808a74ca" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci" />
        </memballoon>
        <controller index="0" model="virtio-scsi" type="scsi">
            <driver iothread="1" />
            <alias name="ua-4f06c2bb-8a59-4eb2-b045-87f47ca5d729" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci" />
        </controller>
        <video>
            <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="32768" />
            <alias name="ua-5f8e5f24-0ccc-4172-a38f-a413881108cf" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci" />
        </video>
        <controller index="0" type="ide">
            <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci" />
        </controller>
        <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
            <listen network="vdsm-ovirtmgmt" type="network" />
        </graphics>
        <controller index="0" model="piix3-uhci" type="usb">
            <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci" />
        </controller>
        <watchdog action="reset" model="i6300esb">
            <alias name="ua-c0e56946-a8c1-4514-b4a0-48eca24028c4" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x08" type="pci" />
        </watchdog>
        <controller index="0" ports="16" type="virtio-serial">
            <alias name="ua-ecaf6c1f-822a-4d9a-ae16-c396cb8d8c4f" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci" />
        </controller>
        <channel type="spicevmc">
            <target name="com.redhat.spice.0" type="virtio" />
        </channel>
        <disk device="cdrom" snapshot="no" type="file">
            <driver error_policy="report" name="qemu" type="raw" />
            <source file="" startupPolicy="optional">
                <seclabel model="dac" relabel="no" type="none" />
            </source>
            <target bus="ide" dev="hdc" />
            <readonly />
            <alias name="ua-b51dc6a4-0c23-43b1-9933-d2191d205f97" />
            <address bus="1" controller="0" target="0" type="drive" unit="0" />
        </disk>
        <disk device="disk" snapshot="no" type="file">
            <target bus="scsi" dev="sda" />
            <source file="/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2">
                <seclabel model="dac" relabel="no" type="none" />
            </source>
            <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw" />
            <alias name="ua-8d6e1948-5fbb-48a0-9e73-d26db5031351" />
            <address bus="0" controller="0" target="0" type="drive" unit="0" />
            <boot order="1" />
            <serial>8d6e1948-5fbb-48a0-9e73-d26db5031351</serial>
        </disk>
        <interface type="bridge">
            <model type="virtio" />
            <link state="up" />
            <source bridge="dc-dus-public" />
            <driver name="vhost" queues="4" />
            <alias name="ua-ad0da345-548a-427d-b5bc-3405854199dc" />
            <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
            <mac address="56:6f:e1:75:00:13" />
            <mtu size="1500" />
            <bandwidth />
        </interface>
    </devices>
    <pm>
        <suspend-to-disk enabled="no" />
        <suspend-to-mem enabled="no" />
    </pm>
    <os>
        <type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type>
        <smbios mode="sysinfo" />
    </os>
    <metadata>
        <ns0:qos />
        <ovirt-vm:vm>
            <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
            <ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
            <ovirt-vm:custom />
            <ovirt-vm:device mac_address="56:6f:e1:75:00:13">
                <ovirt-vm:custom />
            </ovirt-vm:device>
            <ovirt-vm:device devtype="disk" name="sda">
                <ovirt-vm:poolID>9e6c3132-32f0-11ea-86bc-002590b8ddd6</ovirt-vm:poolID>
                <ovirt-vm:volumeID>886c30b7-1a91-449d-a232-cc68eb6b67f2</ovirt-vm:volumeID>
                <ovirt-vm:imageID>8d6e1948-5fbb-48a0-9e73-d26db5031351</ovirt-vm:imageID>
                <ovirt-vm:domainID>fec2eb5e-21b5-496b-9ea5-f718b2cb5556</ovirt-vm:domainID>
            </ovirt-vm:device>
            <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
            <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior>
        </ovirt-vm:vm>
    </metadata>
</domain>
 (vm:2886)
2020-02-03 01:20:44,772+0100 INFO  (libvirt/events) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') CPU running: onResume (vm:6062)
2020-02-03 01:20:45,031+0100 INFO  (libvirt/events) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') abnormal vm stop device ua-8d6e1948-5fbb-48a0-9e73-d26db5031351 error eother (vm:5075)
2020-02-03 01:20:45,031+0100 INFO  (libvirt/events) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') CPU stopped: onIOError (vm:6062)
2020-02-03 01:20:45,035+0100 INFO  (libvirt/events) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') CPU stopped: onSuspend (vm:6062)
2020-02-03 01:20:45,036+0100 WARN  (libvirt/events) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') device sda reported I/O error (vm:4001)
2020-02-03 01:20:45,121+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/after_vm_start/50_openstacknet: rc=0 err= (hooks:114)
2020-02-03 01:20:45,273+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/after_vm_start/openstacknet_utils.py: rc=0 err= (hooks:114)
2020-02-03 01:20:45,426+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/after_device_create/50_openstacknet: rc=0 err= (hooks:114)
2020-02-03 01:20:45,580+0100 INFO  (vm/e39fd732) [root] /usr/libexec/vdsm/hooks/after_device_create/openstacknet_utils.py: rc=0 err= (hooks:114)
2020-02-03 01:20:45,590+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') Starting connection (guestagent:256)
2020-02-03 01:20:45,592+0100 INFO  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') CPU stopped: domain initialization (vm:6062)
2020-02-03 01:20:45,593+0100 WARN  (vm/e39fd732) [virt.vm] (vmId='e39fd732-29a9-45aa-98a5-dbd9a8b9b724') device sda reported I/O error (vm:4001)
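
To try to reproduce the I/O error outside of libvirt, the idea is to read the disk image as the vdsm user (uid 36) through the FUSE mount, with direct I/O since the volume runs strict-o-direct (untested sketch; image path taken from the prepareImage line above):

[root@node01 ~] # sudo -u vdsm dd if="/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images/8d6e1948-5fbb-48a0-9e73-d26db5031351/886c30b7-1a91-449d-a232-cc68eb6b67f2" of=/dev/null bs=4M iflag=direct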
