On Thu, Feb 21, 2019 at 8:47 PM <[email protected]> wrote:
>
> Hello,
> I have a 3-node oVirt 4.3 cluster that I've set up using Gluster
> (hyperconverged setup).
> I need to increase the amount of storage and compute, so I added a 4th host
> (server4.example.com). Is it possible to expand the number of bricks
> (storage) in the "data" volume?
>
> I did some research, and the following old post caught my eye:
> https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e
> So the question is: is that approach feasible? Is it even possible from
> an oVirt point of view?
>
Expanding by 1 node is only possible if you have sufficient space on the existing nodes to create a replica 2 + arbiter volume. The post describes how to create your volumes in a way that lets you move the bricks around so that each of the bricks in a replica set resides on a separate server. We do not yet have an automatic way to provision and rebalance the bricks in order to expand by 1 node.

>
> ---------------------------
> My gluster volume:
> ---------------------------
> Volume Name: data
> Type: Replicate
> Volume ID: 003ffea0-b441-43cb-a38f-ccdf6ffb77f8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: server1.example1.com:/gluster_bricks/data/data
> Brick2: server2.example.com:/gluster_bricks/data/data
> Brick3: server3.example.com:/gluster_bricks/data/data (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.choose-local: off
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: enable
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> Thanks.
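For illustration, a minimal sketch of the manual approach from the post, applied to the volume layout quoted above. This assumes server4 has already been peer-probed into the trusted pool and that the new brick directories (the `data2` paths below are hypothetical) have been prepared on each host; run from the gluster CLI, not through the oVirt engine, and test outside production first:

```shell
# Add a second replica set (2 data bricks + 1 arbiter) to the "data"
# volume, using free space on two existing nodes plus the new node.
# This turns the 1 x (2 + 1) volume into a 2 x (2 + 1)
# distributed-replicate volume.
gluster volume add-brick data replica 3 arbiter 1 \
  server4.example.com:/gluster_bricks/data2/data \
  server1.example1.com:/gluster_bricks/data2/data \
  server2.example.com:/gluster_bricks/data2/arbiter

# If needed, move a brick so no replica set has two bricks on the
# same server (source brick, then destination brick):
gluster volume replace-brick data \
  server1.example1.com:/gluster_bricks/data2/data \
  server3.example.com:/gluster_bricks/data2/data \
  commit force

# Rebalance so existing data is redistributed across both replica sets:
gluster volume rebalance data start
gluster volume rebalance data status
```

Note that each replica set must keep its bricks on distinct servers for quorum to protect you from a single host failure, which is why the replace-brick step may be needed after adding bricks on existing nodes.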
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/4NNGYOLCFJ4SN2QEO34TSWYGSKBVGEL2/

