[ovirt-users] Re: Gluster VM image Resync Time

2019-05-18 Thread Indivar Nair
Thanks, Strahil,

Sorry, got busy with some other work.
But, better late than never.

Regards,


Indivar Nair

On Wed, Mar 27, 2019 at 4:34 PM Strahil  wrote:

> [quoted reply trimmed]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E6DX33SHVMEBBRMUBI2YDHTIX4ANZ4WK/


[ovirt-users] Re: Gluster VM image Resync Time

2019-05-18 Thread Indivar Nair
Thanks, Krutika,

Sorry, got busy with some other work.
But, better late than never.

Regards,


Indivar Nair

On Thu, Mar 28, 2019 at 12:26 PM Krutika Dhananjay wrote:

> [quoted thread trimmed]


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-28 Thread Krutika Dhananjay
On Thu, Mar 28, 2019 at 2:28 PM Krutika Dhananjay wrote:

> Gluster 5.x does have two important performance-related fixes that are not
> part of 3.12.x -
> i.  in shard-replicate interaction -
> https://bugzilla.redhat.com/show_bug.cgi?id=1635972
>

Sorry, wrong bug-id. This should be
https://bugzilla.redhat.com/show_bug.cgi?id=1549606.

-Krutika

> ii. in qemu-gluster-fuse interaction -
> https://bugzilla.redhat.com/show_bug.cgi?id=1635980
>
> The two fixes do improve write performance in vm-storage workload. Do let
> us know your experience if you happen to move to gluster-5.x.
>
> -Krutika
>
> [rest of quoted thread trimmed]


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-28 Thread Krutika Dhananjay
Gluster 5.x does have two important performance-related fixes that are not
part of 3.12.x -
i.  in shard-replicate interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635972
ii. in qemu-gluster-fuse interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635980

The two fixes do improve write performance in VM-storage workloads. Do let
us know your experience if you happen to move to gluster-5.x.

-Krutika



On Thu, Mar 28, 2019 at 1:13 PM Strahil  wrote:

> [quoted thread trimmed]
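For anyone weighing that move from 3.12.x, a quick way to check what you are
currently running is sketched below. These are standard gluster CLI calls,
but verify them against your release.

    # Installed Gluster version on this node.
    gluster --version

    # Cluster-wide operating version; once all nodes are upgraded it can be
    # raised so that newer volume features become available.
    gluster volume get all cluster.op-version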


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-28 Thread Strahil
Hi Krutika,

I have noticed some performance penalties (10%-15%) when using sharding in
v3.12.
What is the situation now with 5.5?
Best Regards,
Strahil Nikolov

On Mar 28, 2019 08:56, Krutika Dhananjay wrote:
> [quoted thread trimmed]
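For readers who want to quantify that penalty on their own hardware, a crude
write test from a client mount is sketched below. The path is a placeholder,
and it should be run on a scratch volume: disabling sharding on a volume that
already holds sharded files is destructive.

    # Sequential-write test against a FUSE-mounted volume ('/mnt/testvol' is
    # a placeholder). Run once with features.shard enabled and once on an
    # otherwise-identical volume with it disabled, then compare throughput.
    dd if=/dev/zero of=/mnt/testvol/bench.img bs=1M count=2048 oflag=direct
    rm -f /mnt/testvol/bench.img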


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-28 Thread Krutika Dhananjay
Right. So Gluster stores what are called "indices" for each modified file
(or shard)
under a special hidden directory of the "good" bricks at
$BRICK_PATH/.glusterfs/indices/xattrop.
When the offline brick comes back up, the file corresponding to each index
is healed, and then the index is deleted
to mark the fact that the file has been healed.

You can try this and see it for yourself. Just create a 1x3 plain
replicate volume, and enable shard on it.
Create a big file (big enough to have multiple shards). Check that the
shards are created under $BRICK_PATH/.shard.
Now kill a brick. Modify a small portion of the file. Run `ls` on
$BRICK_PATH/.glusterfs/indices/xattrop of the online bricks.
You'll notice entries named after the gfids (the unique identifier
gluster assigns to each file) of the shards,
and only of those shards that the write modified, not ALL shards of
this really big file.
Then, when you bring the brick back up using `gluster volume start $VOL
force`, the
shards get healed and the directory eventually becomes empty.

-Krutika


On Thu, Mar 28, 2019 at 12:14 PM Indivar Nair wrote:

> [quoted thread trimmed]
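The experiment described above can be scripted roughly as follows. This is a
sketch under stated assumptions: one test host named host1, three bricks
under /bricks, a volume named testvol, and the default 64MB shard size; all
names and paths are placeholders.

    # 1x3 replicate volume with sharding enabled ('force' because all three
    # bricks live on one host in this test).
    gluster volume create testvol replica 3 \
        host1:/bricks/b1 host1:/bricks/b2 host1:/bricks/b3 force
    gluster volume set testvol features.shard enable
    gluster volume start testvol
    mount -t glusterfs host1:/testvol /mnt/testvol

    # A file big enough to span several 64MB shards.
    dd if=/dev/urandom of=/mnt/testvol/vm.img bs=1M count=512
    ls /bricks/b1/.shard        # shards named <gfid>.1, <gfid>.2, ...

    # Simulate a brick failure: find the brick PID with
    # 'gluster volume status testvol' and kill it (PID below is illustrative).
    kill 12345

    # Overwrite a small region; only the touched shard(s) get index entries
    # on the surviving bricks.
    dd if=/dev/urandom of=/mnt/testvol/vm.img bs=1M count=1 seek=100 conv=notrunc
    ls /bricks/b1/.glusterfs/indices/xattrop

    # Restart the brick; self-heal empties the index directory as it goes.
    gluster volume start testvol force
    gluster volume heal testvol info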


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-28 Thread Indivar Nair
Hi Krutika,

So how does the Gluster node know which shards were modified after it went
down?
Do the other Gluster nodes keep track of it?

Regards,


Indivar Nair


On Thu, Mar 28, 2019 at 9:45 AM Krutika Dhananjay wrote:

> [quoted thread trimmed]
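As Krutika's reply above describes, the surviving bricks do keep this
bookkeeping on disk. Besides the xattrop index entries, AFR records
pending-operation counters in extended attributes on the good copies; a
small sketch for inspecting them (the volume name, brick path, and shard
name are placeholders):

    # Pending data/metadata/entry counts a good brick holds against the
    # offline replica; non-zero values mean that copy needs healing.
    getfattr -d -m 'trusted.afr' -e hex /bricks/b1/.shard/<gfid>.7

    # One entry per file (or shard) awaiting heal.
    ls /bricks/b1/.glusterfs/indices/xattrop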


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-27 Thread Krutika Dhananjay
Each shard is a separate file whose size equals the value of
"features.shard-block-size".
So when a brick/node goes down, only those shards belonging to the VM that
were modified will be synced later when the brick comes back up.
Does that answer your question?

-Krutika

On Wed, Mar 27, 2019 at 7:48 PM Sahina Bose  wrote:

> [quoted thread trimmed]
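To relate the shard size to a given image: a 500GB disk at the common 64MB
shard size splits into about 8000 shards, of which only the dirtied ones are
queued for heal. A sketch for checking the value in effect ('testvol' is a
placeholder):

    # Shard size currently in effect for the volume.
    gluster volume get testvol features.shard-block-size

    # Rough shard count for a 500GB image at 64MB shards:
    #   500 * 1024 / 64 = 8000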


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-27 Thread Sahina Bose
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair  wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether there 
> are any changes that need to be replicated.
> Am I right?

+Krutika Dhananjay
> [rest of quoted thread trimmed]


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-27 Thread Indivar Nair
Hi Strahil,

Ok. Looks like sharding should make the resyncs faster.

I searched for more info on it, but couldn't find much.
I believe it will still have to compare each shard to determine whether
there are any changes that need to be replicated.
Am I right?

Regards,

Indivar Nair



On Wed, Mar 27, 2019 at 4:34 PM Strahil  wrote:

> [quoted reply trimmed]


[ovirt-users] Re: Gluster VM image Resync Time

2019-03-27 Thread Strahil
By default, oVirt uses 'sharding', which splits files into logical chunks.
This greatly reduces healing time, as a VM's disk is not always completely
overwritten and only the shards that differ will be healed.

Maybe you should change the default shard size.

Best Regards,
Strahil Nikolov

On Mar 27, 2019 08:24, Indivar Nair wrote:
>
> Hi All,
>
> We are planning a 2 + 1 arbitrated mirrored Gluster setup.
> We would have around 50 - 60 VMs, with an average 500GB disk size.
>
> Now, in case one of the Gluster nodes goes completely out of sync, roughly
> how long would it take to resync (in your experience)?
> Will it impact the working of VMs in any way?
> Is there anything to be taken care of, in advance, to prepare for such a 
> situation?
>
> Regards,
>
>
> Indivar Nair
>
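The two knobs this answer points at can be exercised as below; a hedged
sketch, with 'testvol' as a placeholder. Note that a changed shard size only
applies to files created after the change; existing images keep the shard
size they were written with.

    # Change the shard size for newly created files.
    gluster volume set testvol features.shard-block-size 128MB

    # While a node is resyncing, watch how much healing remains.
    gluster volume heal testvol info
    gluster volume heal testvol statistics heal-count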