Thanks, Strahil,
Sorry, got busy with some other work.
But, better late than never.
Regards,
Indivar Nair
On Wed, Mar 27, 2019 at 4:34 PM Strahil wrote:
> By default ovirt uses 'sharding' which splits the files into logical
> chunks. This greatly reduces healing time, as VM's disk is not always
> completely overwritten and only the shards that are different will be
> healed.
Thanks, Krutika,
Sorry, got busy with some other work.
But, better late than never.
Regards,
Indivar Nair
On Thu, Mar 28, 2019 at 12:26 PM Krutika Dhananjay
wrote:
> Right. So Gluster stores what are called "indices" for each modified file
> (or shard)
> under a special hidden directory of the "good" bricks at
> $BRICK_PATH/.glusterfs/indices/xattrop.
On Thu, Mar 28, 2019 at 2:28 PM Krutika Dhananjay
wrote:
> Gluster 5.x does have two important performance-related fixes that are not
> part of 3.12.x -
> i. in shard-replicate interaction -
> https://bugzilla.redhat.com/show_bug.cgi?id=1635972
>
Sorry, wrong bug-id. This should be
https://bugz
Gluster 5.x does have two important performance-related fixes that are not
part of 3.12.x -
i. in shard-replicate interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635972
ii. in qemu-gluster-fuse interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635980
The two fixes do improve w
Hi Krutika,
I have noticed some performance penalties (10%-15%) when using sharding in
v3.12.
What is the situation now with 5.5 ?
Best Regards,
Strahil Nikolov
On Mar 28, 2019 08:56, Krutika Dhananjay
wrote:
>
> Right. So Gluster stores what are called "indices" for each modified file
> (or shard) under a special hidden directory of the "good" bricks at
> $BRICK_PATH/.glusterfs/indices/xattrop.
>
Right. So Gluster stores what are called "indices" for each modified file
(or shard)
under a special hidden directory of the "good" bricks at
$BRICK_PATH/.glusterfs/indices/xattrop.
When the offline brick comes back up, the file corresponding to each index
is healed, and then the index is deleted to mark that the file no longer
needs healing.
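A toy sketch of the index-based heal described above (illustrative names and
logic only, not Gluster's actual implementation): the good bricks record an
index per modified shard while a brick is down, and healing walks exactly
those indices, removing each one once its shard is back in sync.

```python
# Toy model of index-based healing: record an index per modified shard,
# then heal only the indexed shards and clear the indices afterwards.
# Purely illustrative; not Gluster's actual code.

def record_modification(indices: set, shard_id: int) -> None:
    """Good bricks add an index entry for each modified shard."""
    indices.add(shard_id)

def heal(indices: set, copy_shard) -> int:
    """Heal every indexed shard, deleting the indices once done."""
    healed = 0
    for shard_id in sorted(indices):
        copy_shard(shard_id)   # sync this shard from a good brick
        healed += 1
    indices.clear()            # indices removed: nothing left to heal
    return healed

indices = set()
for shard in (3, 7, 3):        # shard 3 touched twice -> still one index
    record_modification(indices, shard)
print(sorted(indices))         # [3, 7]
print(heal(indices, lambda s: None))  # 2
print(indices)                 # set()
```

Note how repeated writes to the same shard collapse into a single index, which
is why only the set of distinct modified shards matters for heal time.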
Hi Krutika,
So how does the Gluster node know which shards were modified after it went
down?
Do the other Gluster nodes keep track of it?
Regards,
Indivar Nair
On Thu, Mar 28, 2019 at 9:45 AM Krutika Dhananjay
wrote:
> Each shard is a separate file of size equal to value of
> "features.shard-block-size".
Each shard is a separate file of size equal to the value of
"features.shard-block-size".
So when a brick/node was down, only those shards of the VM's disk that were
modified will be synced later when the brick comes back up.
Does that answer your question?
-Krutika
On Wed, Mar 27, 2019 at 7:48 PM Sahi
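Since each shard covers a fixed byte range of the image, it is easy to sketch
which shards a given write touches. This is a simplified illustration of the
offset arithmetic, not Gluster's shard translator itself:

```python
# Sketch: map a write at [offset, offset+length) to shard numbers,
# assuming fixed-size shards of features.shard-block-size bytes.
# Illustrative only.

def shards_touched(offset: int, length: int, shard_size: int) -> list:
    """Return the shard numbers a write of `length` bytes at `offset` spans."""
    if length <= 0:
        return []
    first = offset // shard_size
    last = (offset + length - 1) // shard_size
    return list(range(first, last + 1))

MB = 1024 * 1024
# With 64 MB shards, a 4 MB write at offset 130 MB touches only shard 2:
print(shards_touched(130 * MB, 4 * MB, 64 * MB))   # [2]
# A write straddling a shard boundary dirties two shards:
print(shards_touched(63 * MB, 2 * MB, 64 * MB))    # [0, 1]
```

So a brick that was down only needs to resync the few shard files whose byte
ranges were written in the meantime, rather than the whole image.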
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether
> there are any changes that need to be replicated.
Hi Strahil,
Ok. Looks like sharding should make the resyncs faster.
I searched for more info on it, but couldn't find much.
I believe it will still have to compare each shard to determine whether
there are any changes that need to be replicated.
Am I right?
Regards,
Indivar Nair
On Wed, Mar
By default oVirt uses 'sharding', which splits the files into logical chunks.
This greatly reduces healing time, as the VM's disk is not always completely
overwritten and only the shards that are different will be healed.
Maybe you should change the default shard size.
Best Regards,
Strahil Nikolov
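A back-of-the-envelope calculation of the heal savings described above:
without sharding the whole image must be compared/healed, while with sharding
only the modified shards are. The disk size and modified-shard count below are
made-up example numbers; 64 MB is Gluster's default shard-block-size.

```python
# Rough heal-savings arithmetic for a sharded VM image.
# Example figures are hypothetical; 64 MB is the shard-block-size default.

GB = 1024 ** 3
MB = 1024 ** 2

image_size = 100 * GB
shard_size = 64 * MB
total_shards = image_size // shard_size
modified = 50                  # shards written while a brick was down

full_heal = image_size         # heal cost without sharding: whole image
shard_heal = modified * shard_size  # heal cost with sharding

print(total_shards)            # 1600
print(shard_heal // MB, "MB healed instead of", full_heal // GB, "GB")
```

This is also why tuning the shard size matters: smaller shards mean less data
to resync per dirtied region, at the cost of more files for Gluster to track.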