Yes. I’m using libgfapi access on Gluster 6.7 with oVirt 4.3.8 just fine, but I 
don’t use snapshots. You can work around the HA issue with DNS and backup 
server entries on the storage domain as well. Worth it to me for the 
performance; YMMV.
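
To make that workaround concrete, here is a minimal sketch (the hostnames are 
made up; substitute your own Gluster nodes). Point the storage domain path at 
a DNS name that resolves to all Gluster hosts, and list the other hosts as 
backup volfile servers in the domain's mount options:

    # Storage domain path, e.g. a round-robin DNS name for the volume:
    #   gluster.example.lan:/vmstore
    # "Mount Options" field on the storage domain:
    backup-volfile-servers=node2.example.lan:node3.example.lan

That way neither the FUSE mount nor the volfile fetch depends on any single 
server being reachable.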

> On Feb 12, 2020, at 8:04 AM, Jayme <jay...@gmail.com> wrote:
> 
> From my understanding it's not a default option, but many users are still 
> using libgfapi successfully. I'm not sure about its status in the latest 
> 4.3.8 release, but I know it is/was working for people in previous versions. 
> The libgfapi bugs affect HA and snapshots (on 3-way replica HCI), but it 
> should still work otherwise, unless, as I said, something changed in more 
> recent releases of oVirt.
> 
> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <guillaume.pav...@interactiv-group.com> wrote:
> Libgfapi is not supported because of an old bug in qemu. That qemu bug is 
> slowly getting fixed, but the bugs about libgfapi support in oVirt have since 
> been closed as WONTFIX and DEFERRED.
> 
> See:
> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
> https://bugzilla.redhat.com/show_bug.cgi?id=1484660 : "No plans to enable 
> libgfapi in RHHI-V for now. Closing this bug"
> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to enable 
> libgfapi in RHHI-V for now. Closing this bug"
> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as no 
> action taken from long back. Please reopen if required."
> 
> It would be nice if someone could reopen the closed bugs so this feature 
> doesn't get forgotten.
> 
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
> 
> 
> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho <s.pani...@gmail.com> wrote:
> I used the cockpit-based hyperconverged setup, and "option rpc-auth-allow-insecure" 
> is absent from /etc/glusterfs/glusterd.vol.
> 
> I'm going to redo the cluster this week and report back. Thanks for the tip!
> 
> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic <bu...@onholyground.com> wrote:
> The hosts will still mount the volume via FUSE, but you might double-check 
> that you set the storage up as Gluster and not NFS.
> 
> Then Gluster used to need this configuration set in glusterd.vol:
> 
>     option rpc-auth-allow-insecure on
> 
> I’m not sure if that gets added by a hyperconverged setup or not, but I’d 
> check it.
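> 
> In case it's missing, this is roughly what I mean (a sketch; "vmstore" stands 
> in for whatever your volume is actually called):
> 
>     # /etc/glusterfs/glusterd.vol on each Gluster host, inside the
>     # "volume management" block:
>     option rpc-auth-allow-insecure on
> 
>     # plus the matching volume-level option, then restart glusterd:
>     gluster volume set vmstore server.allow-insecure on
>     systemctl restart glusterd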
> 
>> On Feb 10, 2020, at 4:41 PM, Stephen Panicho <s.pani...@gmail.com> wrote:
>> 
>> No, this was a relatively new cluster, only a couple of days old, with just 
>> a handful of VMs including the engine.
>> 
>> On Mon, Feb 10, 2020 at 5:26 PM Jayme <jay...@gmail.com> wrote:
>> Curious, do the VMs have active snapshots?
>> 
>> On Mon, Feb 10, 2020 at 5:59 PM <s.pani...@gmail.com> wrote:
>> Hello, all. I have a 3-node hyperconverged oVirt 4.3.8 cluster running on 
>> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard 
>> about libgfapi, so I thought I'd give it a shot. After looking through the 
>> documentation, plus lots of threads and BZ reports, I've done the following 
>> to enable it:
>> 
>> First, I shut down all VMs except the engine. Then...
>> 
>> On the hosts:
>> 1. setsebool -P virt_use_glusterfs on
>> 2. Set dynamic_ownership=0 in /etc/libvirt/qemu.conf
>> 
>> On the engine VM:
>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>> 2. systemctl restart ovirt-engine
>> 
>> VMs now fail to launch. Am I doing this correctly? I should also note that 
>> the hosts still have the Gluster domain mounted via FUSE.
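>> 
>> For completeness, here's the rough sanity check I'd use to confirm the 
>> settings took effect (my own sketch, not from any official doc):
>> 
>>     # on each host
>>     getsebool virt_use_glusterfs                      # expect: virt_use_glusterfs --> on
>>     grep '^dynamic_ownership' /etc/libvirt/qemu.conf  # expect: dynamic_ownership=0
>> 
>>     # on the engine VM
>>     engine-config -g LibgfApiSupported                # expect: true for version 4.3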
>> 
>> Here's a relevant bit from engine.log:
>> 
>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive 
>> file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native: Could not read qcow2 header: Invalid argument.
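>> 
>> (As a side note, I gather the same gfapi path can be exercised outside of 
>> oVirt by pointing qemu-img at the URL from that error, assuming qemu-kvm on 
>> the host was built with Gluster support; a sketch:
>> 
>>     qemu-img info gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78
>> 
>> If that also fails to read the qcow2 header, the problem would seem to be in 
>> the Gluster/gfapi layer rather than in oVirt itself.)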
>> 
>> The full engine.log from one of the attempts:
>> 
>> 2020-02-06 16:38:24,909Z INFO  
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
>> (ForkJoinPool-1-worker-12) [] add VM 
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>> 2020-02-06 16:38:25,010Z ERROR 
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
>> (ForkJoinPool-1-worker-12) [] Rerun VM 
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS 
>> 'node2.ovirt.trashnet.xyz'
>> 2020-02-06 16:38:25,091Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: 
>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host 
>> node2.ovirt.trashnet.xyz.
>> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object 
>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', 
>> sharedLocks=''}'
>> 2020-02-06 16:38:25,179Z INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>>  log id: 2107f52a
>> 2020-02-06 16:38:25,181Z INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
>> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command: RunVmCommand 
>> internal: false. Entities affected :  ID: 
>> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role 
>> type USER
>> 2020-02-06 16:38:25,313Z INFO  
>> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine 
>> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for 
>> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
>> 2020-02-06 16:38:25,382Z INFO  
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, 
>> UpdateVmDynamicDataVDSCommand( 
>> UpdateVmDynamicDataVDSCommandParameters:{hostId='null', 
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', 
>> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
>>  log id: 4a83911f
>> 2020-02-06 16:38:25,417Z INFO  
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
>> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
>> 2020-02-06 16:38:25,418Z INFO  
>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand( 
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', 
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 
>> 5e07ba66
>> 2020-02-06 16:38:25,420Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, 
>> CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz, 
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', 
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 
>> 1bfa03c4
>> 2020-02-06 16:38:25,424Z INFO  
>> [org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Kernel FIPS - Guid: 
>> c3465ca2-395e-4c0c-b72e-b5b7153df452 fips: false
>> 2020-02-06 16:38:25,435Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] VM <?xml version="1.0" 
>> encoding="UTF-8"?><domain type="kvm" 
>> xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
>>   <name>yumcache</name>
>>   <uuid>df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</uuid>
>>   <memory>1048576</memory>
>>   <currentMemory>1048576</currentMemory>
>>   <iothreads>1</iothreads>
>>   <maxMemory slots="16">4194304</maxMemory>
>>   <vcpu current="1">16</vcpu>
>>   <sysinfo type="smbios">
>>     <system>
>>       <entry name="manufacturer">oVirt</entry>
>>       <entry name="product">OS-NAME:</entry>
>>       <entry name="version">OS-VERSION:</entry>
>>       <entry name="serial">HOST-SERIAL:</entry>
>>       <entry name="uuid">df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</entry>
>>     </system>
>>   </sysinfo>
>>   <clock offset="variable" adjustment="0">
>>     <timer name="rtc" tickpolicy="catchup"/>
>>     <timer name="pit" tickpolicy="delay"/>
>>     <timer name="hpet" present="no"/>
>>   </clock>
>>   <features>
>>     <acpi/>
>>   </features>
>>   <cpu match="exact">
>>     <model>EPYC</model>
>>     <feature name="ibpb" policy="require"/>
>>     <feature name="virt-ssbd" policy="require"/>
>>     <topology cores="1" threads="1" sockets="16"/>
>>     <numa>
>>       <cell id="0" cpus="0" memory="1048576"/>
>>     </numa>
>>   </cpu>
>>   <cputune/>
>>   <devices>
>>     <input type="tablet" bus="usb"/>
>>     <channel type="unix">
>>       <target type="virtio" name="ovirt-guest-agent.0"/>
>>       <source mode="bind" 
>> path="/var/lib/libvirt/qemu/channels/df9dbac4-35c0-40ee-acd4-a1cfc959aa8b.ovirt-guest-agent.0"/>
>>     </channel>
>>     <channel type="unix">
>>       <target type="virtio" name="org.qemu.guest_agent.0"/>
>>       <source mode="bind" 
>> path="/var/lib/libvirt/qemu/channels/df9dbac4-35c0-40ee-acd4-a1cfc959aa8b.org.qemu.guest_agent.0"/>
>>     </channel>
>>     <controller type="pci" model="pcie-root-port" index="1">
>>       <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" 
>> type="pci" multifunction="on"/>
>>     </controller>
>>     <memballoon model="virtio">
>>       <stats period="5"/>
>>       <alias name="ua-27c77007-3a3c-4431-958d-90fd1c7257dd"/>
>>       <address bus="0x05" domain="0x0000" function="0x0" slot="0x00" 
>> type="pci"/>
>>     </memballoon>
>>     <controller type="pci" model="pcie-root-port" index="2">
>>       <address bus="0x00" domain="0x0000" function="0x1" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="9">
>>       <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" 
>> type="pci" multifunction="on"/>
>>     </controller>
>>     <controller type="sata" index="0">
>>       <address bus="0x00" domain="0x0000" function="0x2" slot="0x1f" 
>> type="pci"/>
>>     </controller>
>>     <rng model="virtio">
>>       <backend model="random">/dev/urandom</backend>
>>       <alias name="ua-51960005-6b95-47e9-82a7-67d5e0d6cf8a"/>
>>     </rng>
>>     <controller type="pci" model="pcie-root-port" index="6">
>>       <address bus="0x00" domain="0x0000" function="0x5" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="15">
>>       <address bus="0x00" domain="0x0000" function="0x6" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="13">
>>       <address bus="0x00" domain="0x0000" function="0x4" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="7">
>>       <address bus="0x00" domain="0x0000" function="0x6" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <graphics type="vnc" port="-1" autoport="yes" passwd="*****" 
>> passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
>>       <listen type="network" network="vdsm-ovirtmgmt"/>
>>     </graphics>
>>     <controller type="pci" model="pcie-root-port" index="16">
>>       <address bus="0x00" domain="0x0000" function="0x7" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="12">
>>       <address bus="0x00" domain="0x0000" function="0x3" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <video>
>>       <model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/>
>>       <alias name="ua-8a295e96-40c3-44de-a3b0-1c4a685a5473"/>
>>       <address bus="0x00" domain="0x0000" function="0x0" slot="0x01" 
>> type="pci"/>
>>     </video>
>>     <graphics type="spice" port="-1" autoport="yes" passwd="*****" 
>> passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
>>       <channel name="main" mode="secure"/>
>>       <channel name="inputs" mode="secure"/>
>>       <channel name="cursor" mode="secure"/>
>>       <channel name="playback" mode="secure"/>
>>       <channel name="record" mode="secure"/>
>>       <channel name="display" mode="secure"/>
>>       <channel name="smartcard" mode="secure"/>
>>       <channel name="usbredir" mode="secure"/>
>>       <listen type="network" network="vdsm-ovirtmgmt"/>
>>     </graphics>
>>     <controller type="pci" model="pcie-root-port" index="5">
>>       <address bus="0x00" domain="0x0000" function="0x4" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="usb" model="qemu-xhci" index="0" ports="8">
>>       <address bus="0x02" domain="0x0000" function="0x0" slot="0x00" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="4">
>>       <address bus="0x00" domain="0x0000" function="0x3" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="3">
>>       <address bus="0x00" domain="0x0000" function="0x2" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="11">
>>       <address bus="0x00" domain="0x0000" function="0x2" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="scsi" model="virtio-scsi" index="0">
>>       <driver iothread="1"/>
>>       <alias name="ua-d0bf6fcd-7aa2-4658-b7cc-3dac259b7ad2"/>
>>       <address bus="0x03" domain="0x0000" function="0x0" slot="0x00" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="8">
>>       <address bus="0x00" domain="0x0000" function="0x7" slot="0x02" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="14">
>>       <address bus="0x00" domain="0x0000" function="0x5" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="pci" model="pcie-root-port" index="10">
>>       <address bus="0x00" domain="0x0000" function="0x1" slot="0x03" 
>> type="pci"/>
>>     </controller>
>>     <controller type="virtio-serial" index="0" ports="16">
>>       <address bus="0x04" domain="0x0000" function="0x0" slot="0x00" 
>> type="pci"/>
>>     </controller>
>>     <channel type="spicevmc">
>>       <target type="virtio" name="com.redhat.spice.0"/>
>>     </channel>
>>     <controller type="pci" model="pcie-root"/>
>>     <interface type="bridge">
>>       <model type="virtio"/>
>>       <link state="up"/>
>>       <source bridge="vmnet"/>
>>       <alias name="ua-ceda0ef6-9139-4e5c-8840-86fe344ecbd3"/>
>>       <address bus="0x01" domain="0x0000" function="0x0" slot="0x00" 
>> type="pci"/>
>>       <mac address="56:6f:91:b9:00:05"/>
>>       <mtu size="1500"/>
>>       <filterref filter="vdsm-no-mac-spoofing"/>
>>       <bandwidth/>
>>     </interface>
>>     <disk type="file" device="cdrom" snapshot="no">
>>       <driver name="qemu" type="raw" error_policy="report"/>
>>       <source file="" startupPolicy="optional">
>>         <seclabel model="dac" type="none" relabel="no"/>
>>       </source>
>>       <target dev="sdc" bus="sata"/>
>>       <readonly/>
>>       <alias name="ua-bdf99844-2d02-411b-90bb-671ee26764cb"/>
>>       <address bus="0" controller="0" unit="2" type="drive" target="0"/>
>>     </disk>
>>     <disk snapshot="no" type="network" device="disk">
>>       <target dev="sda" bus="scsi"/>
>>       <source protocol="gluster" 
>> name="vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78">
>>         <host name="node1.fs.trashnet.xyz" port="0"/>
>>         <seclabel model="dac" type="none" relabel="no"/>
>>       </source>
>>       <driver name="qemu" discard="unmap" io="native" type="qcow2" 
>> error_policy="stop" cache="none"/>
>>       <alias name="ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4"/>
>>       <address bus="0" controller="0" unit="0" type="drive" target="0"/>
>>       <boot order="1"/>
>>       <serial>a1d56b14-6d72-4f46-a0aa-eb0870c36bc4</serial>
>>     </disk>
>>     <lease>
>>       <key>df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</key>
>>       <lockspace>781717e5-1cff-43a1-b586-9941503544e8</lockspace>
>>       <target offset="6291456" 
>> path="/rhev/data-center/mnt/glusterSD/node1.fs.trashnet.xyz:_vmstore/781717e5-1cff-43a1-b586-9941503544e8/dom_md/xleases"/>
>>     </lease>
>>   </devices>
>>   <pm>
>>     <suspend-to-disk enabled="no"/>
>>     <suspend-to-mem enabled="no"/>
>>   </pm>
>>   <os>
>>     <type arch="x86_64" machine="pc-q35-rhel7.6.0">hvm</type>
>>     <smbios mode="sysinfo"/>
>>   </os>
>>   <metadata>
>>     <ovirt-tune:qos/>
>>     <ovirt-vm:vm>
>>       <ovirt-vm:minGuaranteedMemoryMb 
>> type="int">512</ovirt-vm:minGuaranteedMemoryMb>
>>       <ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
>>       <ovirt-vm:custom/>
>>       <ovirt-vm:device mac_address="56:6f:91:b9:00:05">
>>         <ovirt-vm:custom/>
>>       </ovirt-vm:device>
>>       <ovirt-vm:device devtype="disk" name="sda">
>>         
>> <ovirt-vm:poolID>2ffaec76-462c-11ea-b155-00163e512202</ovirt-vm:poolID>
>>         
>> <ovirt-vm:volumeID>a2314816-7970-49ce-a80c-ab0d1cf17c78</ovirt-vm:volumeID>
>>         
>> <ovirt-vm:imageID>a1d56b14-6d72-4f46-a0aa-eb0870c36bc4</ovirt-vm:imageID>
>>         
>> <ovirt-vm:domainID>781717e5-1cff-43a1-b586-9941503544e8</ovirt-vm:domainID>
>>       </ovirt-vm:device>
>>       <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
>>       <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior>
>>     </ovirt-vm:vm>
>>   </metadata>
>> </domain>
>> 
>> 2020-02-06 16:38:25,455Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
>> CreateBrokerVDSCommand, return: , log id: 1bfa03c4
>> 2020-02-06 16:38:25,494Z INFO  
>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, CreateVDSCommand, 
>> return: WaitForLaunch, log id: 5e07ba66
>> 2020-02-06 16:38:25,495Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock freed to object 
>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', 
>> sharedLocks=''}'
>> 2020-02-06 16:38:25,533Z INFO  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: 
>> USER_STARTED_VM(153), VM yumcache was started by admin@internal-authz (Host: 
>> node1.ovirt.trashnet.xyz).
>> 2020-02-06 16:38:33,300Z INFO  
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
>> (ForkJoinPool-1-worker-5) [] VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b' was 
>> reported as Down on VDS 
>> 'c3465ca2-395e-4c0c-b72e-b5b7153df452'(node1.ovirt.trashnet.xyz)
>> 2020-02-06 16:38:33,301Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>> (ForkJoinPool-1-worker-5) [] START, DestroyVDSCommand(HostName = 
>> node1.ovirt.trashnet.xyz, 
>> DestroyVmVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
>>  vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', secondsToWait='0', 
>> gracefully='false', reason='', ignoreNoVm='true'}), log id: 1f951ea9
>> 2020-02-06 16:38:33,478Z INFO  
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-8) [] Fetched 2 VMs from VDS 
>> 'c3465ca2-395e-4c0c-b72e-b5b7153df452'
>> 2020-02-06 16:38:33,545Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>> (ForkJoinPool-1-worker-5) [] FINISH, DestroyVDSCommand, return: , log id: 
>> 1f951ea9
>> 2020-02-06 16:38:33,546Z INFO  
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
>> (ForkJoinPool-1-worker-5) [] VM 
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) moved from 'WaitForLaunch' 
>> --> 'Down'
>> 2020-02-06 16:38:33,623Z ERROR 
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (ForkJoinPool-1-worker-5) [] EVENT_ID: VM_DOWN_ERROR(119), VM yumcache is 
>> down with error. Exit message: internal error: qemu unexpectedly closed the 
>> monitor: [2020-02-06 16:38:31.723977] E [MSGID: 108006] 
>> [afr-common.c:5323:__afr_handle_child_down_event] 0-vmstore-replicate-0: All 
>> subvolumes are down. Going offline until at least one of them comes back up.
>> [2020-02-06 16:38:31.724765] I [io-stats.c:4027:fini] 0-vmstore: io-stats 
>> translator unloaded
>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive 
>> file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native: Could not read qcow2 header: Invalid argument.
>> 2020-02-06 16:38:33,624Z INFO  
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
>> (ForkJoinPool-1-worker-5) [] add VM 
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>> 2020-02-06 16:38:33,796Z ERROR 
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
>> (ForkJoinPool-1-worker-5) [] Rerun VM 
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS 
>> 'node1.ovirt.trashnet.xyz'
>> 2020-02-06 16:38:33,899Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (EE-ManagedThreadFactory-engine-Thread-223) [] EVENT_ID: 
>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host 
>> node1.ovirt.trashnet.xyz.
> 

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQACEV4HU5TROWGKFRAQK4HBNF3AHQ62/
