I enabled libgfapi (see the commands below) and powered the VM off and back on.

- engine-config --all
- LibgfApiSupported: true version: 4.3
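
This is roughly how I understand the flag is queried and set (I am assuming engine-config -g/-s plus an engine restart is the right procedure, and that --cver=4.3 matches my cluster level; please correct me if not):

# query the current value for all cluster levels
engine-config -g LibgfApiSupported

# set it for cluster level 4.3, then restart the engine so it is picked up
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine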

How can I see that libgfapi is actually in use for the VM? The disk definition still looks the same as before.

- virsh dumpxml 15
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/mnt/glusterSD/10.9.9.101:_vmstore/f2c621de-42bf-4dbf-920c-adf4506b786d/images/1e231e3e-d98c-491a-9236-907814d4837/c755aaa3-7d3d-4c0d-8184-c6aae37229ba'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
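
From what I have read, with libgfapi active I would expect the disk to appear as a network disk rather than a file disk, roughly like the sketch below (this is only my assumption of what it should look like, not what I actually see; the volume/image path is shortened here):

    <disk type='network' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source protocol='gluster' name='vmstore/f2c621de-42bf-4dbf-920c-adf4506b786d/images/...'>
        <host name='10.9.9.101' port='24007'/>
      </source>
      <target dev='sdc' bus='scsi'/>
    </disk>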

Here is the volume setup (gluster volume info vmstore):

Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 195e2a05-9667-4b8b-b0b7-82294631de50
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.9.9.101:/gluster_bricks/vmstore/vmstore
Brick2: 10.9.9.102:/gluster_bricks/vmstore/vmstore
Brick3: 10.9.9.103:/gluster_bricks/vmstore/vmstore
Brick4: 10.9.9.101:/gluster_bricks/S4CYNF0M219849L/S4CYNF0M219849L
Brick5: 10.9.9.102:/gluster_bricks/S4CYNF0M219836L/S4CYNF0M219836L
Brick6: 10.9.9.103:/gluster_bricks/S4CYNF0M219801Y/S4CYNF0M219801Y
Options Reconfigured:
performance.write-behind-window-size: 64MB
performance.flush-behind: on
performance.stat-prefetch: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: on
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
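
If it helps, this is how I was planning to double-check on the host; I am assuming a gfapi disk would show a gluster source in the block list instead of the /rhev/... mount path, but that is only my guess:

# read-only connection, so no SASL credentials are needed
virsh -r domblklist 15

# look for a gluster:// URI on the qemu command line
ps -ef | grep qemu-kvm | grep -o "gluster://[^ ,]*"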

Thank you for your support.