On 05/05/17 13:49, Pranith Kumar Karampuri wrote:
On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <[email protected]> wrote:
It is the overall time; an 8TB data disk healed 2x faster in the 8+2
configuration.
Wow, that is counterintuitive to me. I will need to look into this to
find out why that could be. Thanks a lot for this feedback!
Matrix multiplication for encoding/decoding an 8+2 is 4 times faster than
a 16+4 (one 16x16 matrix is composed of 4 submatrices of 8x8), but each
matrix operation on a 16+4 configuration processes twice as much data as
on an 8+2, so the net effect is that 8+2 is twice as fast as 16+4.
An 8+2 also uses bigger blocks on each brick, processing the same amount
of data in fewer I/O operations and with bigger network packets.
These are probably the reasons why 16+4 is slower than 8+2.
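A rough sketch of that arithmetic (the pure-Python cost model and the
fragment size below are illustrative only, not GlusterFS code):

    # Decoding one stripe of an m+n dispersed volume applies an m x m
    # matrix to the m surviving fragments: about m*m multiply-accumulates
    # per column of bytes, while the stripe carries only m fragments of data.
    def decode_cost_per_user_byte(m, fragment_size=512):
        matrix_work = m * m * fragment_size   # total multiply-accumulates
        user_bytes = m * fragment_size        # data recovered per stripe
        return matrix_work / user_bytes       # ~ m operations per byte

    print(decode_cost_per_user_byte(8))    # 8.0  -> 8+2
    print(decode_cost_per_user_byte(16))   # 16.0 -> 16+4, 2x the work per byte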
See my other email for a more detailed description.
Xavi
On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
<[email protected]> wrote:
>
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
> <[email protected]> wrote:
>>
>> Healing gets slower as you increase m in an m+n configuration.
>> We are using a 16+4 configuration without any problems other than heal
>> speed.
>> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
>> 8+2 are faster by 2x.
>
>
> As you increase the number of nodes participating in an EC set, the
> number of parallel heals increases. Is the heal speed improvement you
> saw per file, or in the overall time it took to heal the data?
>
>>
>>
>>
>> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
>> <[email protected]> wrote:
>> >
>> > 8+2 and 8+3 configurations are not a limitation, just suggestions.
>> > You can create a 16+3 volume without any issue.
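>> > Something along these lines should work for 16+3 (the volume name and
>> > brick paths below are just placeholders):
>> >
>> >   gluster volume create myvol disperse 19 redundancy 3 \
>> >       server{1..19}:/bricks/brick1/myvol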
>> >
>> > Ashish
>> >
>> > ________________________________
>> > From: "Alastair Neil" <[email protected]
<mailto:[email protected]>>
>> > To: "gluster-users" <[email protected]
<mailto:[email protected]>>
>> > Sent: Friday, May 5, 2017 2:23:32 AM
>> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
>> >
>> >
>> > Hi
>> >
>> > We are deploying a large (24-node/45-brick) cluster and noted that the
>> > RHES guidelines limit the number of data bricks in a disperse set to 8.
>> > Is there any reason for this? I am aware that you want this to be a
>> > power of 2, but as we have a large number of nodes we were planning on
>> > going with 16+3. Dropping to 8+2 or 8+3 would be a real waste for us.
>> >
>> > Thanks,
>> >
>> >
>> > Alastair
>> >
>> >
>
>
>
>
> --
> Pranith
--
Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users