Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-25 Thread FERNANDO FREDIANI

RAID 6 doesn't make exactly 3 copies of the data.

I think storage is expensive enough, compared to the total cost of the
platform, that 3 copies is a waste of storage or a luxury, given that if you
have a permanent failure you can still rebuild a second copy of the data,
provided you have storage left for that.



On 25/04/2017 10:26, Donny Davis wrote:
I personally want three copies of my data, more akin to RAID 6(ish) so 
in my case replica 3 makes perfect sense.


On Mon, Apr 24, 2017 at 11:34 AM, Denis Chaplygin wrote:


Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI wrote:

Hi Denis, understood.
What if in the case of adding a fourth host to the running
cluster, will the copy of data be kept only twice in any of
the 4 servers ?


Replica volumes can be built only from 2 or 3 bricks. There is no
way to make a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp
server1:/b1 server2:/b2 server3:/b3 server4:/b4

test-volume would then be like a RAID 10 - you will have two replica
volumes, b1+b2 and b3+b4, combined into a single distributed volume.
In that case you will have only two copies of your data: part of it
will be stored twice on b1 and b2, and the other part will be stored
twice on b3 and b4. You will be able to extend that distributed
volume by adding new replica pairs.


[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes





Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-25 Thread Donny Davis
I personally want three copies of my data, more akin to RAID 6(ish) so in
my case replica 3 makes perfect sense.

On Mon, Apr 24, 2017 at 11:34 AM, Denis Chaplygin 
wrote:

> Hello!
>
> On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Hi Denis, understood.
>> What if in the case of adding a fourth host to the running cluster, will
>> the copy of data be kept only twice in any of the 4 servers ?
>>
>
> Replica volumes can be built only from 2 or 3 bricks. There is no way to
> make a replica volume from 4 bricks.
>
> But you may combine distributed volumes and replica volumes [1]:
>
> gluster volume create test-volume replica 2 transport tcp server1:/b1
> server2:/b2 server3:/b3 server4:/b4
>
> test-volume would then be like a RAID 10 - you will have two replica volumes,
> b1+b2 and b3+b4, combined into a single distributed volume. In that case you
> will have only two copies of your data: part of it will be stored twice
> on b1 and b2, and the other part will be stored twice on b3 and b4.
> You will be able to extend that distributed volume by adding new replica pairs.
>
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes
>


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 5:08 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hi Denis, understood.
> What if in the case of adding a fourth host to the running cluster, will
> the copy of data be kept only twice in any of the 4 servers ?
>

Replica volumes can be built only from 2 or 3 bricks. There is no way to
make a replica volume from 4 bricks.

But you may combine distributed volumes and replica volumes [1]:

gluster volume create test-volume replica 2 transport tcp server1:/b1
server2:/b2 server3:/b3 server4:/b4

test-volume would then be like a RAID 10 - you will have two replica volumes,
b1+b2 and b3+b4, combined into a single distributed volume. In that case you
will have only two copies of your data: part of it will be stored twice on
b1 and b2, and the other part will be stored twice on b3 and b4.
You will be able to extend that distributed volume by adding new replica pairs.
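
For example, adding another pair later could look roughly like this (this is
only a sketch - server5/server6 and the brick paths are placeholders, and
bricks have to be added in multiples of the replica count):

gluster volume add-brick test-volume server5:/b5 server6:/b6
gluster volume rebalance test-volume start

The rebalance step is optional; it just spreads the existing files across the
new replica pair as well.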


[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Ok, great, thanks for the clarification.

Therefore a replica 3 configuration with an arbiter means the raw storage
space cost is 'similar' to RAID 1, and the actual data exists only 2 times,
on two different servers.
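
(For example, two full-size 10 TB data bricks plus a small arbiter brick give
roughly 10 TB usable out of ~20 TB of raw data-brick space - the same 2:1
ratio as RAID 1 - whereas a plain replica 3 of 10 TB bricks gives 10 TB
usable out of 30 TB raw.)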


Regards
Fernando


On 24/04/2017 11:35, Denis Chaplygin wrote:
With an arbiter volume you still have a replica 3 volume, meaning that
you have three participants in your quorum. But only two of those
participants keep the actual data. The third one, the arbiter, stores only
some metadata, not the file contents, so data is not replicated 3 times.


On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI wrote:


But then quorum doesn't replicate data 3 times, does it?

Fernando


On 24/04/2017 10:24, Denis Chaplygin wrote:

Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI wrote:

Out of curiosity, why do you, and people in general, use replica 3
more than replica 2?


The answer is simple - quorum. With just two participants you
don't know what to do when your peer is unreachable. When you
have three participants, you are able to establish a majority. In
that case, when two participants are able to communicate, they
know they form the majority, and the smaller part of the cluster
knows that it should not accept any changes.

If I understand correctly this seems overkill and a waste of
storage, as 2 copies of data (replica 2) seem pretty reasonable,
similar to RAID 1, and even in the worst case the data can still
be replicated again after a failure. I see that replica 3
helps more on performance, at the cost of space.


You are absolutely right. You need two copies of data to provide
data redundancy, and you need three (or more) members in the cluster
to provide a distinguishable majority. That is why we have arbiter
volumes, which solve exactly that issue [1].

[1]

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/









Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
With an arbiter volume you still have a replica 3 volume, meaning that you
have three participants in your quorum. But only two of those participants
keep the actual data. The third one, the arbiter, stores only some metadata,
not the file contents, so data is not replicated 3 times.

On Mon, Apr 24, 2017 at 3:33 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> But then quorum doesn't replicate data 3 times, does it?
>
> Fernando
>
> On 24/04/2017 10:24, Denis Chaplygin wrote:
>
> Hello!
>
> On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Out of curiosity, why do you, and people in general, use replica 3 more
>> than replica 2?
>>
>
> The answer is simple - quorum. With just two participants you don't know
> what to do when your peer is unreachable. When you have three
> participants, you are able to establish a majority. In that case, when two
> participants are able to communicate, they know they form the majority, and
> the smaller part of the cluster knows that it should not accept any changes.
>
>
>> If I understand correctly this seems overkill and a waste of storage, as 2
>> copies of data (replica 2) seem pretty reasonable, similar to RAID 1, and
>> even in the worst case the data can still be replicated again after a
>> failure. I see that replica 3 helps more on performance, at the cost of space.
>>
> You are absolutely right. You need two copies of data to provide data
> redundancy, and you need three (or more) members in the cluster to provide
> a distinguishable majority. That is why we have arbiter volumes, which solve
> exactly that issue [1].
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>
>
>


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Denis Chaplygin
Hello!

On Mon, Apr 24, 2017 at 3:02 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Out of curiosity, why do you, and people in general, use replica 3 more
> than replica 2?
>

The answer is simple - quorum. With just two participants you don't know
what to do when your peer is unreachable. When you have three
participants, you are able to establish a majority. In that case, when two
participants are able to communicate, they know they form the majority, and
the smaller part of the cluster knows that it should not accept any changes.


> If I understand correctly this seems overkill and a waste of storage, as 2
> copies of data (replica 2) seem pretty reasonable, similar to RAID 1, and
> even in the worst case the data can still be replicated again after a
> failure. I see that replica 3 helps more on performance, at the cost of space.
>

You are absolutely right. You need two copies of data to provide data
redundancy, and you need three (or more) members in the cluster to provide
a distinguishable majority. That is why we have arbiter volumes, which solve
exactly that issue [1].

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
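
For reference, such a volume is created with the usual create command plus an
arbiter count; a rough sketch (the volume name, hostnames and brick paths
below are only placeholders):

gluster volume create arb-volume replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb

The third brick of the set becomes the arbiter and only needs space for
metadata, not for the full data set.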


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread FERNANDO FREDIANI

Hello.

Out of curiosity, why do you, and people in general, use replica 3 more
than replica 2?


If I understand correctly this seems overkill and a waste of storage, as 2
copies of data (replica 2) seem pretty reasonable, similar to RAID 1, and
even in the worst case the data can still be replicated again after a
failure. I see that replica 3 helps more on performance, at the cost of space.


Fernando


On 24/04/2017 08:33, Sven Achtelik wrote:


Hi All,

my oVirt setup is 3 hosts with gluster and replica 3. I always try to
stay on the current version and I’m applying updates/upgrades if there
are any. For this I put a host into maintenance and also use the “Stop
Gluster Service” checkbox. After it’s done updating I’ll set it back
to active, wait until the engine sees all bricks again, and then
I’ll go for the next host.


This worked fine for me over the last months, but now that I have more
and more VMs running, the changes written to the gluster volume while a
host is in maintenance have become a lot more, and it takes pretty long
for the healing to complete. What I don’t understand is that I don’t
really see a lot of network usage in the GUI during that time, and it
feels quite slow. The network for gluster is 10G and I’m quite happy with
its performance; it’s just the healing that takes long. I noticed it
because I couldn’t update the third host due to unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing
that needs to be configured? Or should I maybe change my updating
process somehow to avoid having so many changes in the queue?


Thank you,

Sven





Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:36 PM, Sven Achtelik wrote:


Hi Kasturi,

I’ll try that. Will this be persistent over a reboot of a host or even
stopping the complete cluster?


Thank you


Hi Sven,

This is a volume set option (it has nothing to do with a reboot) and it
will be present on the volume until you reset it manually using the 'gluster
volume reset' command. You just need to execute 'gluster volume heal
VOLNAME granular-entry-heal enable' and this will do the right thing
for you.
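
If you want to double-check it later (for example after a reboot), the
following should print the current value of the option, assuming your gluster
version already has 'volume get':

gluster volume get VOLNAME cluster.granular-entry-heal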


Thanks
kasturi.


*From:* knarra [mailto:kna...@redhat.com]
*Sent:* Monday, April 24, 2017 13:44
*To:* Sven Achtelik <sven.achte...@eps.aero>; users@ovirt.org
*Subject:* Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:

Hi All,

my oVirt setup is 3 hosts with gluster and replica 3. I always
try to stay on the current version and I’m applying
updates/upgrades if there are any. For this I put a host into
maintenance and also use the “Stop Gluster Service” checkbox.
After it’s done updating I’ll set it back to active, wait until
the engine sees all bricks again, and then I’ll go for the next host.

This worked fine for me over the last months, but now that I have
more and more VMs running, the changes written to the gluster
volume while a host is in maintenance have become a lot more, and it
takes pretty long for the healing to complete. What I don’t
understand is that I don’t really see a lot of network usage in
the GUI during that time, and it feels quite slow. The network for
gluster is 10G and I’m quite happy with its performance; it’s just
the healing that takes long. I noticed it because I couldn’t update
the third host due to unsynced gluster volumes.

Is there any limiting variable that slows down traffic during
healing that needs to be configured? Or should I maybe change my
updating process somehow to avoid having so many changes in the queue?

Thank you,

Sven




Hi Sven,

Do you have granular entry heal enabled on the volume? If not,
there is a feature called granular entry self-heal which should be
enabled on sharded volumes to get its benefits. Then, when a brick goes
down and, say, only 1 out of a million entries is created or deleted,
self-heal is done for only that file and it won't crawl the entire
directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal
enable / disable' command only if the volume is in the Created state. If
the volume is in any state other than Created, for example Started or
Stopped, execute the 'gluster volume heal VOLNAME granular-entry-heal
enable / disable' command to enable or disable the granular-entry-heal
option.


Thanks

kasturi





Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread Sven Achtelik
Hi Kasturi,

I'll try that. Will this be persistent over a reboot of a host or even stopping
the complete cluster?


Thank you
From: knarra [mailto:kna...@redhat.com]
Sent: Monday, April 24, 2017 13:44
To: Sven Achtelik <sven.achte...@eps.aero>; users@ovirt.org
Subject: Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:
Hi All,

my oVirt setup is 3 hosts with gluster and replica 3. I always try to stay on
the current version and I'm applying updates/upgrades if there are any. For this
I put a host into maintenance and also use the "Stop Gluster Service" checkbox.
After it's done updating I'll set it back to active, wait until the engine sees
all bricks again, and then I'll go for the next host.

This worked fine for me over the last months, but now that I have more and more
VMs running, the changes written to the gluster volume while a host is in
maintenance have become a lot more, and it takes pretty long for the healing to
complete. What I don't understand is that I don't really see a lot of network
usage in the GUI during that time, and it feels quite slow. The network for
gluster is 10G and I'm quite happy with its performance; it's just the healing
that takes long. I noticed it because I couldn't update the third host due to
unsynced gluster volumes.

Is there any limiting variable that slows down traffic during healing that
needs to be configured? Or should I maybe change my updating process somehow
to avoid having so many changes in the queue?

Thank you,

Sven





Hi Sven,

Do you have granular entry heal enabled on the volume? If not, there is a
feature called granular entry self-heal which should be enabled on sharded
volumes to get its benefits. Then, when a brick goes down and, say, only 1 out
of a million entries is created or deleted, self-heal is done for only that
file and it won't crawl the entire directory.

You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal enable /
disable' command only if the volume is in the Created state. If the volume is in
any state other than Created, for example Started or Stopped, execute the
'gluster volume heal VOLNAME granular-entry-heal enable / disable' command to
enable or disable the granular-entry-heal option.

Thanks

kasturi


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:03 PM, Sven Achtelik wrote:


Hi All,

my oVirt setup is 3 hosts with gluster and replica 3. I always try to
stay on the current version and I’m applying updates/upgrades if there
are any. For this I put a host into maintenance and also use the “Stop
Gluster Service” checkbox. After it’s done updating I’ll set it back
to active, wait until the engine sees all bricks again, and then
I’ll go for the next host.


This worked fine for me over the last months, but now that I have more
and more VMs running, the changes written to the gluster volume while a
host is in maintenance have become a lot more, and it takes pretty long
for the healing to complete. What I don’t understand is that I don’t
really see a lot of network usage in the GUI during that time, and it
feels quite slow. The network for gluster is 10G and I’m quite happy with
its performance; it’s just the healing that takes long. I noticed it
because I couldn’t update the third host due to unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing
that needs to be configured? Or should I maybe change my updating
process somehow to avoid having so many changes in the queue?


Thank you,

Sven





Hi Sven,

Do you have granular entry heal enabled on the volume? If not, there
is a feature called granular entry self-heal which should be enabled
on sharded volumes to get its benefits. Then, when a brick goes down and,
say, only 1 out of a million entries is created or deleted, self-heal is
done for only that file and it won't crawl the entire directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal
enable / disable' command only if the volume is in the Created state. If the
volume is in any state other than Created, for example Started or Stopped,
execute the 'gluster volume heal VOLNAME granular-entry-heal enable / disable'
command to enable or disable the granular-entry-heal option.
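
Put concretely (VOLNAME being your actual volume name), that is either

gluster volume set VOLNAME cluster.granular-entry-heal enable

while the volume is still in the Created state, or

gluster volume heal VOLNAME granular-entry-heal enable

once the volume has been started.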


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users