Dear Serkan,

Thank you very much for your support and explanation.
I really appreciate the information you provided.

Regards,
Mauro

> On 20 Sep 2017, at 08:26, Serkan Çoban <[email protected]> wrote:
> 
> If you add bricks to the existing volume, one host in each three-host
> group could be down. If you recreate the volume with one brick on each
> host, then two random hosts can be tolerated.
> Assume s1, s2, s3 are the current servers and you add s4, s5, s6 and
> extend the volume. If any two servers within the same group go down,
> you lose data. If you choose two random hosts, the probability of
> losing data is 40% in this case (6 of the C(6,2) = 15 possible host
> pairs fall within a single group).
> If you recreate the volume on s1, s2, s3, s4, s5, s6 with one brick on
> each host, any two random servers can go down. If you choose two random
> hosts, the probability of losing data is 0% in this case.
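> 
> A minimal brute-force check of these figures (a sketch only: the
> loss_probability helper and the layout dicts below are illustrative,
> assuming 2 bricks per host per subvolume in the extended layout and 1
> brick per host in the recreated one):
> 
> from itertools import combinations
> 
> def loss_probability(subvolumes, hosts, redundancy=2):
>     # subvolumes: list of dicts mapping host -> number of bricks that
>     # subvolume keeps on the host
>     pairs = list(combinations(hosts, 2))
>     bad = sum(
>         1 for down in pairs
>         if any(sum(sv.get(h, 0) for h in down) > redundancy for sv in subvolumes)
>     )
>     return bad / len(pairs)
> 
> hosts = ["s1", "s2", "s3", "s4", "s5", "s6"]
> # extended volume: 6 old subvolumes on s1-s3, 6 new ones on s4-s6,
> # each subvolume keeping 2 bricks on each of its 3 hosts
> extended = [{"s1": 2, "s2": 2, "s3": 2}] * 6 + [{"s4": 2, "s5": 2, "s6": 2}] * 6
> # recreated volume: 12 subvolumes, each with exactly 1 brick on every host
> recreated = [{h: 1 for h in hosts}] * 12
> 
> print(loss_probability(extended, hosts))   # 0.4 -> 40% of two-host failures lose data
> print(loss_probability(recreated, hosts))  # 0.0 -> any two hosts can go down safely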
> 
> On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici <[email protected]> wrote:
>> Dear All,
>> 
>> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume 
>> based on the following hardware:
>> 
>> - 3 gluster servers (each server with 2 CPUs x 10 cores, 64 GB RAM, 12 SAS
>> 12 Gb/s hard disks, 10 GbE storage network)
>> 
>> Now we need to add 3 new servers with the same hardware configuration,
>> keeping the current volume topology.
>> If I'm right, we will obtain a DISTRIBUTED DISPERSED gluster volume with 12
>> subvolumes, each subvolume containing (4+2) bricks, that is a [12x(4+2)]
>> volume.
>> 
>> My question is: in the current volume configuration, only 2 bricks per
>> subvolume (that is, one host) can be down without losing data. What will
>> happen in the new configuration? How many hosts could be down without losing
>> data?
>> 
>> Thank you very much.
>> Mauro Tridici
>> 

_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
