From: "Anand Avati"
To: "Jeff Darcy"
Cc: "Pranith Kumar Karampuri", "Anand Avati" <aav...@redhat.com>, "Raghavan Pichai", "Ravishankar Narayanankutty", "devel"
Sent: Wednesday, May 22, 2013 1:19:19 AM
Subject: Re: [Gluster-devel] Proposal to change locking in data-self-heal

On Tue, ...
On 05/22/2013 08:57 AM, Pranith Kumar Karampuri wrote:
> So you guys are OK with this proposal if we solve version
> compatibility issues?
For myself, yes, I'd say so.
_______________________________________________
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mai...
- Original Message -
> From: "Anand Avati"
> To: "Jeff Darcy"
> Cc: "Pranith Kumar Karampuri", "Anand Avati" <aav...@redhat.com>, "Raghavan Pichai", "Ravishankar Narayanankutty", "devel"
> Sent: Wedn...
Maybe a different approach could solve some of these problems and
improve responsiveness. It's an architectural change so I'm not sure if
it's the right moment to discuss it, but at least it could be considered
for the future. There are a lot of details to consider, so do not take
this as a ful...
On Tue, May 21, 2013 at 7:05 AM, Jeff Darcy wrote:
> On 05/21/2013 09:10 AM, Pranith Kumar Karampuri wrote:
>
>> scenario-1 won't happen because there exists a chance for it to acquire
>> truncate's full file lock after any 128k range sync happens.
>>
>> Scenario-2 won't happen because extra self...
On Tue, 21 May 2013 10:30:46 -0400
Jeff Darcy wrote:
> On 05/21/2013 10:10 AM, Stephan von Krawczynski wrote:
> > See it as a corner case of a configurable option like:
> >
> > self-heal-chunksize = X
> > 128k < X < (unsigned)-1 (meaning all bits 1, don't know how many you have
> > here :-)
>
> ...
On 05/21/2013 10:10 AM, Stephan von Krawczynski wrote:
> See it as a corner case of a configurable option like:
>
> self-heal-chunksize = X
> 128k < X < (unsigned)-1 (meaning all bits 1, don't know how many you have
> here :-)

Unfortunately that doesn't quite work because a whole-file lock covers more...
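The objection above comes down to range-overlap semantics: byte-range locks conflict only when their ranges intersect, so a whole-file lock (upper bound `(unsigned)-1`, "all bits 1") conflicts with every concurrent writer, while a 128K chunk lock blocks only writes to that chunk. A minimal sketch of that intuition, with made-up names and not taken from any GlusterFS source:

```python
# Hypothetical sketch (not GlusterFS code): byte-range locks conflict only
# when their half-open (start, end) ranges overlap, so a whole-file lock
# conflicts with every write, while a chunk lock blocks only its own range.

def overlaps(a, b):
    """Two (start, end) half-open ranges conflict iff they intersect."""
    return a[0] < b[1] and b[0] < a[1]

CHUNK = 128 * 1024
whole_file = (0, 2**64 - 1)              # "(unsigned)-1", i.e. all bits 1
chunk_lock = (0, CHUNK)                  # self-heal working on the first chunk
client_write = (10 * CHUNK, 11 * CHUNK)  # client writing elsewhere in the file

print(overlaps(whole_file, client_write))  # True:  whole-file lock stalls the client
print(overlaps(chunk_lock, client_write))  # False: chunked locking lets it proceed
```

This is why a very large `self-heal-chunksize` degenerates into the whole-file-lock behavior the thread is trying to avoid.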
On Tue, 21 May 2013 09:58:46 -0400
Jeff Darcy wrote:
> [...]
> That's actually how it used to work, which led to many complaints from users
> who would see stalls accessing large files (most often VM images) over GigE
> while self-heal was in progress. Many considered it a show-stopper, and th...
On 05/21/2013 09:10 AM, Pranith Kumar Karampuri wrote:
> scenario-1 won't happen because there exists a chance for it to acquire
> truncate's full file lock after any 128k range sync happens.
>
> Scenario-2 won't happen because extra self-heals that are launched on the
> same file will be blocked in self-...
On 05/21/2013 09:30 AM, Stephan von Krawczynski wrote:
> I am not quite sure if I understood the issue in full detail. But are you
> saying that you "split up" the current self-healing file in 128K chunks
> with locking/unlocking (over the network)? It sounds a bit like the locking
> takes more (cpu) tim...
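Stephan's reading is essentially right: per the thread, self-heal walks the file in 128K chunks, taking and releasing a range lock around each sync. A toy sketch of that loop, with every name invented for illustration (this is not the actual self-heal code path):

```python
# Illustrative only: walking a file in 128K chunks, locking and unlocking a
# byte range around each sync. All function names here are made up.

CHUNK = 128 * 1024

def chunk_ranges(file_size, chunk=CHUNK):
    """Yield (offset, end) half-open ranges covering the file."""
    for off in range(0, file_size, chunk):
        yield (off, min(off + chunk, file_size))

def self_heal(file_size, lock, unlock, sync):
    for off, end in chunk_ranges(file_size):
        lock(off, end)    # range lock, taken over the network
        sync(off, end)    # copy this chunk from the good copy
        unlock(off, end)  # writers to this range may now proceed

print(list(chunk_ranges(300 * 1024)))
# three ranges; the last one is the 44K tail
```

The per-chunk lock/unlock round-trips are exactly the overhead Stephan is asking about; the payoff is that client I/O to other ranges is never stalled for the whole heal.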
On Tue, 21 May 2013 09:10:18 -0400 (EDT)
Pranith Kumar Karampuri wrote:
> [...]
> Solution:
> Since we want to prevent two parallel self-heals, we let them compete in a
> separate "domain". Let's call the domain in which the locks were taken in
> the previous approach the "data-domain".
>
> ...
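The key property of the proposal quoted above is that locks in different domains never conflict: self-heals serialize among themselves in their own domain, without holding up ordinary I/O locks in the "data-domain". A toy model of domain-scoped range locks (this is not the GlusterFS inodelk API, just an illustration of the idea):

```python
# Toy model of per-domain byte-range locks: locks conflict only within the
# SAME domain, so self-heals can serialize among themselves without
# blocking ordinary data-domain locks. Domain names follow the thread;
# everything else is invented for illustration.

held = []  # list of (domain, start, end) currently-held locks

def try_lock(domain, start, end):
    """Grant the lock unless an overlapping lock exists in the same domain."""
    for d, s, e in held:
        if d == domain and start < e and s < end:
            return False  # conflict: same domain, overlapping range
    held.append((domain, start, end))
    return True

FULL = 2**64 - 1

# Self-heal 1 takes a full-file lock in the self-heal domain.
print(try_lock("self-heal-domain", 0, FULL))   # True
# A second self-heal on the same file must wait behind it.
print(try_lock("self-heal-domain", 0, FULL))   # False
# A client's data-domain range lock is unaffected by either.
print(try_lock("data-domain", 0, 128 * 1024))  # True
```

This is how two parallel self-heals are prevented without reintroducing the whole-file stalls discussed earlier in the thread.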
Hi,
This idea was proposed by Brian Foster as a solution to several hangs we
hit in the self-heal + truncate case, and when two self-heals are triggered
on the same file.

Problem:

Scenario-1:
At the moment when data-self-heal is triggered on a file, until the self-heal
is complete, ext...