While creating the volume, just provide bricks that are hosted on different 
servers. 

gluster v create <volume name> redundancy 2 server-1:/brick1 server-2:/brick2 \
server-3:/brick3 server-4:/brick4 server-5:/brick5 server-6:/brick6 

At present you cannot differentiate between data bricks and parity bricks. 
That is, in the above command you cannot say which bricks out of brick1 to 
brick6 would be the parity bricks. 
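
For reference, here is a minimal sketch of the same layout with the disperse 
count stated explicitly, plus a quick check of the result. The volume name 
"ecvol", the server names and the brick paths are only placeholders, and the 
servers are assumed to already be in the trusted pool: 

gluster volume create ecvol disperse 6 redundancy 2 \
    server-1:/bricks/b1 server-2:/bricks/b2 server-3:/bricks/b3 \
    server-4:/bricks/b4 server-5:/bricks/b5 server-6:/bricks/b6

# confirm that every brick in the disperse set sits on a different server
gluster volume info ecvol

With 6 bricks and redundancy 2 this gives a 4+2 subvolume, and because each 
brick comes from a different server, a single node failure costs at most one 
brick. 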

----- Original Message -----

From: "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> 
To: "Ashish Pandey" <aspan...@redhat.com> 
Cc: gluster-users@gluster.org 
Sent: Friday, March 31, 2017 12:19:58 PM 
Subject: Re: [Gluster-users] Node count constraints with EC? 

How can I ensure that each parity brick is stored on a different server ? 

On 30 Mar 2017 at 6:50 AM, "Ashish Pandey" < aspan...@redhat.com > wrote: 



Hi Terry, 

There is no constraint on the number of nodes for erasure coded volumes. 
However, there are some suggestions to keep in mind. 

If you have a 4+2 configuration, that means you can lose at most 2 bricks at a 
time without losing I/O access to your volume. 
Bricks may fail because of a node crash or node disconnection. That is why 
it is always good to have all 6 bricks on 6 different nodes. If you have 3 
bricks on one node and this node goes down, you 
will lose the volume and it will be inaccessible. 
So just keep in mind that you should not lose more than the redundancy count 
of bricks even if any one node goes down. 
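
To put numbers on it: with 4+2, the redundancy is 2, so at most 2 bricks may 
be unavailable at any time. If the 6 bricks sit on only 3 nodes (2 bricks per 
node), a single node failure already removes 2 bricks, and any further brick 
failure makes the volume inaccessible. With 1 brick per node, a single node 
failure removes only 1 brick and the volume can still tolerate one more 
failure. 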

---- 
Ashish 



From: "Terry McGuire" < tmcgu...@ualberta.ca > 
To: gluster-users@gluster.org 
Sent: Wednesday, March 29, 2017 11:59:32 PM 
Subject: [Gluster-users] Node count constraints with EC? 

Hello list. Newbie question: I’m building a low-performance/low-cost storage 
service with a starting size of about 500TB, and want to use Gluster with 
erasure coding. I’m considering subvolumes of maybe 4+2, or 8+3 or 8+4. I was 
thinking I’d spread these over 4 nodes, and add single nodes over time, with 
subvolumes rearranged over new nodes to maintain protection from whole node 
failures. 

However, reading through some Red Hat-provided documentation, they seem to 
suggest that node counts should be a multiple of 3, 6 or 12, depending on 
subvolume config. Is this actually a requirement, or is it only a suggestion 
for best performance or something? 

Can anyone comment on node count constraints with erasure coded subvolumes? 

Thanks in advance for anyone’s reply, 
Terry 

_____________________________ 
Terry McGuire 
Information Services and Technology (IST) 
University of Alberta 
Edmonton, Alberta, Canada T6G 2H1 
Phone: 780-492-9422 


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
