Hello,
I have a 3-node oVirt 4.3 cluster that I've set up using Gluster 
(hyperconverged setup).
I need to increase storage and compute, so I added a 4th host 
(server4.example.com). Is it possible to expand the number of bricks 
(storage) in the "data" volume?
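
From what I understand, the standard way would be to add a whole new (2 + 1) 
replica subvolume, something along these lines (untested; the brick paths are 
placeholders and I'm guessing at the exact add-brick handling for an arbiter 
volume):

# Untested sketch: add a second (2 + 1) subvolume to "data", turning it into
# a 2 x (2 + 1) Distributed-Replicate volume. Bricks have to be added in
# multiples of the replica count, so two of the three new bricks would land
# on hosts that already carry a brick (I only added one new server), and the
# last brick in the set would presumably become the new arbiter. Depending on
# the Gluster version, the replica/arbiter counts may need to be stated
# explicitly on add-brick.
gluster volume add-brick data \
    server4.example.com:/gluster_bricks/data_new/data \
    server1.example.com:/gluster_bricks/data_new/data \
    server2.example.com:/gluster_bricks/data_new/data
gluster volume rebalance data start   # spread existing data over the new subvolume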

I did some research, and the following old post caught my eye:
https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e
So the question is: is that approach feasible, and is it even possible from 
an oVirt point of view?



---------------------------
My gluster volume:
---------------------------
Volume Name: data
Type: Replicate
Volume ID: 003ffea0-b441-43cb-a38f-ccdf6ffb77f8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: server1.example1.com:/gluster_bricks/data/data
Brick2: server2.example.com:/gluster_bricks/data/data
Brick3: server3.example.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


Thanks.
