The current setting on each volume is:
[root@ovirt-node2 ~]# gluster volume get glen cluster.choose-local| awk 
'/choose-local/ {print $2}'
[root@ovirt-node2 ~]# gluster volume get gv0 cluster.choose-local| awk 
'/choose-local/ {print $2}'
[root@ovirt-node2 ~]# gluster volume get gv1 cluster.choose-local| awk 
'/choose-local/ {print $2}'
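The three per-volume queries above can be collapsed into one loop; a minimal sketch, assuming the volume names glen, gv0 and gv1 from this environment:

```shell
# Print cluster.choose-local for every volume in one pass
# (volume names are the ones from this setup; adjust as needed)
for vol in glen gv0 gv1; do
  printf '%s: ' "$vol"
  gluster volume get "$vol" cluster.choose-local | awk '/choose-local/ {print $2}'
done
```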

That is the value stated in the "virt" group.

I set cluster.choose-local to true on every Gluster volume and started 
migrating the Hosted Engine around... a bunch of VMs froze, and after a while 
the Hosted Engine hung as well....

To complete the picture, here is the full option set for glen (the Hosted 
Engine volume) and for gv0 and gv1 (the volumes used by VMs):

[root@ovirt-node3 ~]# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 863221f4-e11c-4589-95e9-aa3948e177f5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Brick1: ovirt-node2.ovirt:/brickgv1/gv1
Brick2: ovirt-node3.ovirt:/brickgv1/gv1
Brick3: ovirt-node4.ovirt:/dati/gv1 (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: true
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: on
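Given that the freezes started right after flipping the option, one way back is to reset cluster.choose-local so each volume returns to its default. This is a hedged sketch using the standard `gluster volume reset` command, again assuming the volume names from this environment:

```shell
# Roll back the experiment: reset cluster.choose-local on each volume
# so it falls back to the default / the "virt" group profile value
for vol in glen gv0 gv1; do
  gluster volume reset "$vol" cluster.choose-local
done
```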