Re: [Gluster-devel] Need inputs on patch #17985

2017-12-06 Thread Milind Changire
In the tests conducted so far, I could not find any evidence of a
performance regression in quick-read.
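
For reference, a rough small-file read check along these lines (a sketch
only; host, volume, and mount names below are placeholders):

    # populate a directory with many small files (sized well under the
    # quick-read caching threshold)
    mkdir -p /mnt/testvol/small
    for i in $(seq 1 10000); do
        head -c 4096 /dev/urandom > /mnt/testvol/small/f$i
    done

    # remount so the client starts with a cold inode table
    umount /mnt/testvol
    mount -t glusterfs host1:/testvol /mnt/testvol

    # readdirplus discovers the entries; read them immediately, i.e.
    # inside the 1-second entry-timeout window discussed below
    ls -l /mnt/testvol/small > /dev/null
    time cat /mnt/testvol/small/f* > /dev/null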



On Thu, Nov 30, 2017 at 11:01 AM, Raghavendra G <raghavendra...@gmail.com>
wrote:

> I think this caused a regression in quick-read. On going through the code,
> I realized that quick-read doesn't fetch the content of a file pointed to
> by a dentry in readdirplus. Since the patch in question prevents any
> lookup from the resolver, reads on a file during the "entry-timeout"
> window (a command-line option to the fuse mount, whose default value is
> 1 second) after the entry was discovered in readdirplus will not be served
> by quick-read, even though the file's size makes it eligible to be cached.
> This may cause a performance regression in read-heavy workloads on small
> files. We'll be doing more testing to identify this.
>
> On Tue, Sep 12, 2017 at 11:31 AM, Raghavendra G <raghavendra...@gmail.com>
> wrote:
>
> Update. Two more days to go until the deadline. So far, no open issues
> have been identified against this patch.
>>
>> On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa <rgowd...@redhat.com
>> > wrote:
>>
>>>
>>>
>>> - Original Message -
>>> > From: "FNU Raghavendra Manjunath" <rab...@redhat.com>
>>> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>>> > Cc: "Raghavendra G" <raghavendra...@gmail.com>, "Nithya Balachandran"
>>> <nbala...@redhat.com>, anoo...@redhat.com,
>>> > "Gluster Devel" <gluster-devel@gluster.org>, "Raghavendra Bhat" <
>>> raghaven...@redhat.com>
>>> > Sent: Thursday, September 7, 2017 6:44:51 PM
>>> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
>>> >
>>> > From the snapview-client perspective, one important thing to note: to
>>> > build the context for the entry point (by default ".snaps"), an
>>> > explicit lookup has to be done on it. The dentry for ".snaps" is not
>>> > returned when readdir is done on its parent directory (not even when
>>> > "ls -a" is done). So, to build the context of ".snaps" (in the context,
>>> > snapview-client saves information about whether it is a real inode or
>>> > a virtual inode), we need a lookup.
>>>
>>> Since the dentry corresponding to ".snaps" is not returned, there won't
>>> be an inode for this directory linked in the itable. Also, glusterfs
>>> wouldn't have given a nodeid corresponding to ".snaps" in the readdir
>>> response (as the dentry itself is not returned). So, the kernel will do
>>> an explicit lookup before doing any operation on ".snaps" (unlike for
>>> dentries that carry a nodeid, where the kernel can choose to skip the
>>> lookup), and we are safe. So, #17985 is safe in its current form.
>>>
>>> >
>>> > From the snapview-server perspective as well, a lookup might be
>>> > needed. In snapview-server, a glfs handle is established between the
>>> > snapview server and the snapshot brick. So an inode in the
>>> > snapview-server process contains the glfs handle for the object being
>>> > accessed from the snapshot. In snapview-server, readdirp does not
>>> > build the inode context (which contains the glfs handle etc.) because
>>> > the glfs handle is returned only in lookup.
>>>
>>> The same argument holds good for this case too. The important point to
>>> note is that no dentry, and hence no nodeid, corresponding to ".snaps"
>>> is passed to the kernel, so the kernel is forced to do an explicit
>>> lookup.
>>>
>>> >
>>> > Regards,
>>> > Raghavendra
>>> >
>>> >
>>> > On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <
>>> rgowd...@redhat.com>
>>> > wrote:
>>> >
>>> > >
>>> > >
>>> > > - Original Message -
>>> > > > From: "Raghavendra G" <raghavendra...@gmail.com>
>>> > > > To: "Nithya Balachandran" <nbala...@redhat.com>
>>> > > > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>,
>>> anoo...@redhat.com,
>>> > > "Gluster Devel" <gluster-devel@gluster.org>,
>>> > > > raghaven...@redhat.com
>>> > > > Sent: Tuesday, August 29, 2017 8:52:28 AM
>>> > > > Subject: Re: [Gluster-devel] Need inputs on patch #17985

Re: [Gluster-devel] Need inputs on patch #17985

2017-11-29 Thread Raghavendra G
I think this caused a regression in quick-read. On going through the code,
I realized that quick-read doesn't fetch the content of a file pointed to
by a dentry in readdirplus. Since the patch in question prevents any lookup
from the resolver, reads on a file during the "entry-timeout" window (a
command-line option to the fuse mount, whose default value is 1 second)
after the entry was discovered in readdirplus will not be served by
quick-read, even though the file's size makes it eligible to be cached.
This may cause a performance regression in read-heavy workloads on small
files. We'll be doing more testing to identify this.
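
One way to probe whether the entry-timeout window is the culprit (a sketch;
the server and volume names are placeholders) is to mount with a much
larger entry-timeout and compare small-file read times against a default
mount:

    # --entry-timeout is the fuse mount option referred to above
    # (default: 1 second)
    glusterfs --volfile-server=host1 --volfile-id=testvol \
              --entry-timeout=10 /mnt/testvol

If this theory holds, a larger window should lengthen the period during
which reads bypass quick-read's cache, making the effect easier to measure.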

On Tue, Sep 12, 2017 at 11:31 AM, Raghavendra G <raghavendra...@gmail.com>
wrote:

> Update. Two more days to go until the deadline. So far, no open issues
> have been identified against this patch.
>
> On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
> wrote:
>
>>
>>
>> - Original Message -
>> > From: "FNU Raghavendra Manjunath" <rab...@redhat.com>
>> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> > Cc: "Raghavendra G" <raghavendra...@gmail.com>, "Nithya Balachandran" <
>> nbala...@redhat.com>, anoo...@redhat.com,
>> > "Gluster Devel" <gluster-devel@gluster.org>, "Raghavendra Bhat" <
>> raghaven...@redhat.com>
>> > Sent: Thursday, September 7, 2017 6:44:51 PM
>> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
>> >
>> > From the snapview-client perspective, one important thing to note: to
>> > build the context for the entry point (by default ".snaps"), an
>> > explicit lookup has to be done on it. The dentry for ".snaps" is not
>> > returned when readdir is done on its parent directory (not even when
>> > "ls -a" is done). So, to build the context of ".snaps" (in the context,
>> > snapview-client saves information about whether it is a real inode or a
>> > virtual inode), we need a lookup.
>>
>> Since the dentry corresponding to ".snaps" is not returned, there won't
>> be an inode for this directory linked in the itable. Also, glusterfs
>> wouldn't have given a nodeid corresponding to ".snaps" in the readdir
>> response (as the dentry itself is not returned). So, the kernel will do
>> an explicit lookup before doing any operation on ".snaps" (unlike for
>> dentries that carry a nodeid, where the kernel can choose to skip the
>> lookup), and we are safe. So, #17985 is safe in its current form.
>>
>> >
>> > From the snapview-server perspective as well, a lookup might be needed.
>> > In snapview-server, a glfs handle is established between the snapview
>> > server and the snapshot brick. So an inode in the snapview-server
>> > process contains the glfs handle for the object being accessed from the
>> > snapshot. In snapview-server, readdirp does not build the inode context
>> > (which contains the glfs handle etc.) because the glfs handle is
>> > returned only in lookup.
>>
>> The same argument holds good for this case too. The important point to
>> note is that no dentry, and hence no nodeid, corresponding to ".snaps" is
>> passed to the kernel, so the kernel is forced to do an explicit lookup.
>>
>> >
>> > Regards,
>> > Raghavendra
>> >
>> >
>> > On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com>
>> > wrote:
>> >
>> > >
>> > >
>> > > - Original Message -
>> > > > From: "Raghavendra G" <raghavendra...@gmail.com>
>> > > > To: "Nithya Balachandran" <nbala...@redhat.com>
>> > > > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>,
>> anoo...@redhat.com,
>> > > "Gluster Devel" <gluster-devel@gluster.org>,
>> > > > raghaven...@redhat.com
>> > > > Sent: Tuesday, August 29, 2017 8:52:28 AM
>> > > > Subject: Re: [Gluster-devel] Need inputs on patch #17985
>> > > >
>> > > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
>> > > nbala...@redhat.com>
>> > > > wrote:
>> > > >
>> > > > It has been a while, but IIRC snapview-client (loaded above
>> > > > dht/tier etc.) had some issues when we ran tiering tests. Rafi
>> > > > might have more info on this - basically, it was expecting to find
>> > > > the inode_ctx populated but it was not.

Re: [Gluster-devel] Need inputs on patch #17985

2017-09-12 Thread Raghavendra G
Update. Two more days to go until the deadline. So far, no open issues have
been identified against this patch.

On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -
> > From: "FNU Raghavendra Manjunath" <rab...@redhat.com>
> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > Cc: "Raghavendra G" <raghavendra...@gmail.com>, "Nithya Balachandran" <
> nbala...@redhat.com>, anoo...@redhat.com,
> > "Gluster Devel" <gluster-devel@gluster.org>, "Raghavendra Bhat" <
> raghaven...@redhat.com>
> > Sent: Thursday, September 7, 2017 6:44:51 PM
> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> >
> > From the snapview-client perspective, one important thing to note: to
> > build the context for the entry point (by default ".snaps"), an explicit
> > lookup has to be done on it. The dentry for ".snaps" is not returned
> > when readdir is done on its parent directory (not even when "ls -a" is
> > done). So, to build the context of ".snaps" (in the context,
> > snapview-client saves information about whether it is a real inode or a
> > virtual inode), we need a lookup.
>
> Since the dentry corresponding to ".snaps" is not returned, there won't be
> an inode for this directory linked in the itable. Also, glusterfs wouldn't
> have given a nodeid corresponding to ".snaps" in the readdir response (as
> the dentry itself is not returned). So, the kernel will do an explicit
> lookup before doing any operation on ".snaps" (unlike for dentries that
> carry a nodeid, where the kernel can choose to skip the lookup), and we
> are safe. So, #17985 is safe in its current form.
>
> >
> > From the snapview-server perspective as well, a lookup might be needed.
> > In snapview-server, a glfs handle is established between the snapview
> > server and the snapshot brick. So an inode in the snapview-server
> > process contains the glfs handle for the object being accessed from the
> > snapshot. In snapview-server, readdirp does not build the inode context
> > (which contains the glfs handle etc.) because the glfs handle is
> > returned only in lookup.
>
> The same argument holds good for this case too. The important point to
> note is that no dentry, and hence no nodeid, corresponding to ".snaps" is
> passed to the kernel, so the kernel is forced to do an explicit lookup.
>
> >
> > Regards,
> > Raghavendra
> >
> >
> > On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com>
> > wrote:
> >
> > >
> > >
> > > - Original Message -
> > > > From: "Raghavendra G" <raghavendra...@gmail.com>
> > > > To: "Nithya Balachandran" <nbala...@redhat.com>
> > > > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com
> ,
> > > "Gluster Devel" <gluster-devel@gluster.org>,
> > > > raghaven...@redhat.com
> > > > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > > > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> > > >
> > > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
> > > nbala...@redhat.com>
> > > > wrote:
> > > >
> > > > > It has been a while, but IIRC snapview-client (loaded above
> > > > > dht/tier etc.) had some issues when we ran tiering tests. Rafi
> > > > > might have more info on this - basically, it was expecting to find
> > > > > the inode_ctx populated but it was not.
> > > > >
> > > >
> > > > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > > > ownership of,
> > > >
> > > > * Identifying whether the patch in question causes the issue?
> > >
> > > gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
> > > quickly checked whether it's the same state stored by
> > > gf_svc_lookup_cbk, and it looks like the same state. So, I guess
> > > readdirp is handled correctly by snapview-client and an explicit
> > > lookup is not required. But I will wait for inputs from rabhat and
> > > rafi.
> > >
> > > [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
> > >
> > > > * Send a fix or at least evaluate whether a fix is possible.
> > > >
> >

Re: [Gluster-devel] Need inputs on patch #17985

2017-09-07 Thread Raghavendra Gowdappa


- Original Message -
> From: "FNU Raghavendra Manjunath" <rab...@redhat.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> Cc: "Raghavendra G" <raghavendra...@gmail.com>, "Nithya Balachandran" 
> <nbala...@redhat.com>, anoo...@redhat.com,
> "Gluster Devel" <gluster-devel@gluster.org>, "Raghavendra Bhat" 
> <raghaven...@redhat.com>
> Sent: Thursday, September 7, 2017 6:44:51 PM
> Subject: Re: [Gluster-devel] Need inputs on patch #17985
> 
> From the snapview-client perspective, one important thing to note: to
> build the context for the entry point (by default ".snaps"), an explicit
> lookup has to be done on it. The dentry for ".snaps" is not returned when
> readdir is done on its parent directory (not even when "ls -a" is done).
> So, to build the context of ".snaps" (in the context, snapview-client
> saves information about whether it is a real inode or a virtual inode), we
> need a lookup.

Since the dentry corresponding to ".snaps" is not returned, there won't be
an inode for this directory linked in the itable. Also, glusterfs wouldn't
have given a nodeid corresponding to ".snaps" in the readdir response (as
the dentry itself is not returned). So, the kernel will do an explicit
lookup before doing any operation on ".snaps" (unlike for dentries that
carry a nodeid, where the kernel can choose to skip the lookup), and we are
safe. So, #17985 is safe in its current form.
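
One hedged way to observe that forced lookup from a client (the volume name
is a placeholder, and the client log path depends on the mount point):

    gluster volume set testvol diagnostics.client-log-level TRACE
    ls -a /mnt/testvol/dir          # readdir: no ".snaps" dentry returned
    stat /mnt/testvol/dir/.snaps    # kernel must issue LOOKUP first
    grep -i lookup /var/log/glusterfs/mnt-testvol.log | grep -i snaps

Because readdir never hands the kernel a nodeid for ".snaps", the stat is
always preceded by a fresh LOOKUP, which is what keeps snapview-client's
context-building intact under #17985.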

> 
> From the snapview-server perspective as well, a lookup might be needed. In
> snapview-server, a glfs handle is established between the snapview server
> and the snapshot brick. So an inode in the snapview-server process
> contains the glfs handle for the object being accessed from the snapshot.
> In snapview-server, readdirp does not build the inode context (which
> contains the glfs handle etc.) because the glfs handle is returned only in
> lookup.

The same argument holds good for this case too. The important point to note
is that no dentry, and hence no nodeid, corresponding to ".snaps" is passed
to the kernel, so the kernel is forced to do an explicit lookup.

> 
> Regards,
> Raghavendra
> 
> 
> On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
> wrote:
> 
> >
> >
> > - Original Message -
> > > From: "Raghavendra G" <raghavendra...@gmail.com>
> > > To: "Nithya Balachandran" <nbala...@redhat.com>
> > > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com,
> > "Gluster Devel" <gluster-devel@gluster.org>,
> > > raghaven...@redhat.com
> > > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> > >
> > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
> > nbala...@redhat.com>
> > > wrote:
> > >
> > > > It has been a while, but IIRC snapview-client (loaded above dht/tier
> > > > etc.) had some issues when we ran tiering tests. Rafi might have
> > > > more info on this - basically, it was expecting to find the
> > > > inode_ctx populated but it was not.
> > > >
> > >
> > > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > > ownership of,
> > >
> > > * Identifying whether the patch in question causes the issue?
> >
> > gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
> > quickly checked whether it's the same state stored by gf_svc_lookup_cbk,
> > and it looks like the same state. So, I guess readdirp is handled
> > correctly by snapview-client and an explicit lookup is not required. But
> > I will wait for inputs from rabhat and rafi.
> >
> > [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
> >
> > > * Send a fix or at least evaluate whether a fix is possible.
> > >
> > > @Others,
> > >
> > > With the motivation of getting some traction on this, is it OK if we:
> > > * Set a deadline of around 15 days to complete the review (or testing
> > > with the patch in question) of respective components and to come up
> > > with issues (if any).
> > > * Post the deadline, if there are no open issues, go ahead and merge
> > > the patch?
> > >
> > > If time is not enough, let us know and we can come up with a reasonable
> > > time.
> > >
> > > regards,
> > > Raghavendra
> > >
> >

Re: [Gluster-devel] Need inputs on patch #17985

2017-09-07 Thread FNU Raghavendra Manjunath
From the snapview-client perspective, one important thing to note: to build
the context for the entry point (by default ".snaps"), an explicit lookup
has to be done on it. The dentry for ".snaps" is not returned when readdir
is done on its parent directory (not even when "ls -a" is done). So, to
build the context of ".snaps" (in the context, snapview-client saves
information about whether it is a real inode or a virtual inode), we need a
lookup.
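
Seen from a client, the behaviour looks like this (a sketch; it assumes USS
is enabled on the volume and that it is mounted at /mnt/testvol):

    ls -a /mnt/testvol/dir        # ".snaps" is absent from the listing
    ls /mnt/testvol/dir/.snaps    # an explicit path triggers the lookup
                                  # and lists the available snapshots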

From the snapview-server perspective as well, a lookup might be needed. In
snapview-server, a glfs handle is established between the snapview server
and the snapshot brick. So an inode in the snapview-server process contains
the glfs handle for the object being accessed from the snapshot. In
snapview-server, readdirp does not build the inode context (which contains
the glfs handle etc.) because the glfs handle is returned only in lookup.

Regards,
Raghavendra


On Tue, Aug 29, 2017 at 12:53 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -
> > From: "Raghavendra G" <raghavendra...@gmail.com>
> > To: "Nithya Balachandran" <nbala...@redhat.com>
> > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com,
> "Gluster Devel" <gluster-devel@gluster.org>,
> > raghaven...@redhat.com
> > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> >
> > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
> nbala...@redhat.com>
> > wrote:
> >
> > > It has been a while, but IIRC snapview-client (loaded above dht/tier
> > > etc.) had some issues when we ran tiering tests. Rafi might have more
> > > info on this - basically, it was expecting to find the inode_ctx
> > > populated but it was not.
> > >
> >
> > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > ownership of,
> >
> > * Identifying whether the patch in question causes the issue?
>
> gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
> quickly checked whether it's the same state stored by gf_svc_lookup_cbk,
> and it looks like the same state. So, I guess readdirp is handled
> correctly by snapview-client and an explicit lookup is not required. But I
> will wait for inputs from rabhat and rafi.
>
> [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
>
> > * Send a fix or at least evaluate whether a fix is possible.
> >
> > @Others,
> >
> > With the motivation of getting some traction on this, is it OK if we:
> > * Set a deadline of around 15 days to complete the review (or testing
> > with the patch in question) of respective components and to come up with
> > issues (if any).
> > * Post the deadline, if there are no open issues, go ahead and merge the
> > patch?
> >
> > If time is not enough, let us know and we can come up with a reasonable
> > time.
> >
> > regards,
> > Raghavendra
> >
> >
> > > On 24 August 2017 at 10:13, Raghavendra G <raghavendra...@gmail.com>
> > > wrote:
> > >
> > >> Note that we need to consider xlators on the brick stack too. I've
> > >> added maintainers/peers of xlators on the brick stack. Please
> > >> explicitly ack/nack whether this patch affects your component.
> > >>
> > >> For reference, the following are the xlators loaded in the brick stack:
> > >>
> > >> storage/posix
> > >> features/trash
> > >> features/changetimerecorder
> > >> features/changelog
> > >> features/bitrot-stub
> > >> features/access-control
> > >> features/locks
> > >> features/worm
> > >> features/read-only
> > >> features/leases
> > >> features/upcall
> > >> performance/io-threads
> > >> features/selinux
> > >> features/marker
> > >> features/barrier
> > >> features/index
> > >> features/quota
> > >> debug/io-stats
> > >> performance/decompounder
> > >> protocol/server
> > >>
> > >>
> > >> For those not following this thread, the question we need to answer
> > >> is, "whether the xlator you are associated with works fine if a
> > >> non-lookup fop (like open, setattr, stat etc) hits it without a
> > >> lookup ever being done on that inode"
> > >>
> > >> regards,
> > >> Raghavendra
>

Re: [Gluster-devel] Need inputs on patch #17985

2017-09-06 Thread Raghavendra G
A gentle reminder. We are midway through the proposed timeline.

On Wed, Aug 30, 2017 at 10:28 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -
> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > To: "Raghavendra G" <raghavendra...@gmail.com>
> > Cc: "Nithya Balachandran" <nbala...@redhat.com>, anoo...@redhat.com,
> "Gluster Devel" <gluster-devel@gluster.org>,
> > raghaven...@redhat.com
> > Sent: Tuesday, August 29, 2017 10:23:49 AM
> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> >
> >
> >
> > - Original Message -
> > > From: "Raghavendra G" <raghavendra...@gmail.com>
> > > To: "Nithya Balachandran" <nbala...@redhat.com>
> > > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com,
> > > "Gluster Devel" <gluster-devel@gluster.org>,
> > > raghaven...@redhat.com
> > > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> > >
> > > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <
> nbala...@redhat.com>
> > > wrote:
> > >
> > > > It has been a while, but IIRC snapview-client (loaded above dht/tier
> > > > etc.) had some issues when we ran tiering tests. Rafi might have
> > > > more info on this - basically, it was expecting to find the
> > > > inode_ctx populated but it was not.
> > > >
> > >
> > > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > > ownership of,
> > >
> > > * Identifying whether the patch in question causes the issue?
> >
> > gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
> > quickly checked whether it's the same state stored by gf_svc_lookup_cbk,
> > and it looks like the same state. So, I guess readdirp is handled
> > correctly by snapview-client and an explicit lookup is not required. But
> > I will wait for inputs from rabhat and rafi.
> >
> > [1] https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
> >
> > > * Send a fix or at least evaluate whether a fix is possible.
> > >
> > > @Others,
> > >
> > > With the motivation of getting some traction on this, is it OK if we:
> > > * Set a deadline of around 15 days to complete the review (or testing
> > > with the patch in question) of respective components and to come up
> > > with issues (if any).
> > > * Post the deadline, if there are no open issues, go ahead and merge
> > > the patch?
> > >
> > > If time is not enough, let us know and we can come up with a reasonable
> > > time.
>
> Since I don't see any objection to the deadline proposed, I am assuming
> everyone is fine with it. Post the deadline, if there are no open issues,
> the patch will be merged.
>
> > >
> > > regards,
> > > Raghavendra
> > >
> > >
> > > > On 24 August 2017 at 10:13, Raghavendra G <raghavendra...@gmail.com>
> > > > wrote:
> > > >
> > > >> Note that we need to consider xlators on the brick stack too. I've
> > > >> added maintainers/peers of xlators on the brick stack. Please
> > > >> explicitly ack/nack whether this patch affects your component.
> > > >>
> > > >> For reference, the following are the xlators loaded in the brick stack:
> > > >>
> > > >> storage/posix
> > > >> features/trash
> > > >> features/changetimerecorder
> > > >> features/changelog
> > > >> features/bitrot-stub
> > > >> features/access-control
> > > >> features/locks
> > > >> features/worm
> > > >> features/read-only
> > > >> features/leases
> > > >> features/upcall
> > > >> performance/io-threads
> > > >> features/selinux
> > > >> features/marker
> > > >> features/barrier
> > > >> features/index
> > > >> features/quota
> > > >> debug/io-stats
> > > >> performance/decompounder
> > > >> protocol/server
> > > >>
> > > >>
> > > >> For those not following this thread, the question we need to answer
> > > >> is, "whether the xlator you are associated with works fine if a
> > > >> non-lookup fop (like open, setattr, stat etc) hits it without a
> > > >> lookup ever being done on that inode"

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-29 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> To: "Raghavendra G" <raghavendra...@gmail.com>
> Cc: "Nithya Balachandran" <nbala...@redhat.com>, anoo...@redhat.com, "Gluster 
> Devel" <gluster-devel@gluster.org>,
> raghaven...@redhat.com
> Sent: Tuesday, August 29, 2017 10:23:49 AM
> Subject: Re: [Gluster-devel] Need inputs on patch #17985
> 
> 
> 
> - Original Message -
> > From: "Raghavendra G" <raghavendra...@gmail.com>
> > To: "Nithya Balachandran" <nbala...@redhat.com>
> > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com,
> > "Gluster Devel" <gluster-devel@gluster.org>,
> > raghaven...@redhat.com
> > Sent: Tuesday, August 29, 2017 8:52:28 AM
> > Subject: Re: [Gluster-devel] Need inputs on patch #17985
> > 
> > On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <nbala...@redhat.com>
> > wrote:
> > 
> > > It has been a while, but IIRC snapview-client (loaded above dht/tier
> > > etc.) had some issues when we ran tiering tests. Rafi might have more
> > > info on this - basically, it was expecting to find the inode_ctx
> > > populated but it was not.
> > >
> > 
> > Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> > ownership of,
> > 
> > * Identifying whether the patch in question causes the issue?
> 
> gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
> quickly checked whether it's the same state stored by gf_svc_lookup_cbk,
> and it looks like the same state. So, I guess readdirp is handled
> correctly by snapview-client and an explicit lookup is not required. But I
> will wait for inputs from rabhat and rafi.
> 
> [1]
> https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
> 
> > * Send a fix or at least evaluate whether a fix is possible.
> > 
> > @Others,
> > 
> > With the motivation of getting some traction on this, is it OK if we:
> > * Set a deadline of around 15 days to complete the review (or testing
> > with the patch in question) of respective components and to come up with
> > issues (if any).
> > * Post the deadline, if there are no open issues, go ahead and merge the
> > patch?
> > 
> > If time is not enough, let us know and we can come up with a reasonable
> > time.

Since I don't see any objection to the deadline proposed, I am assuming
everyone is fine with it. Post the deadline, if there are no open issues,
the patch will be merged.

> > 
> > regards,
> > Raghavendra
> > 
> > 
> > > On 24 August 2017 at 10:13, Raghavendra G <raghavendra...@gmail.com>
> > > wrote:
> > >
> > >> Note that we need to consider xlators on the brick stack too. I've
> > >> added maintainers/peers of xlators on the brick stack. Please
> > >> explicitly ack/nack whether this patch affects your component.
> > >>
> > >> For reference, the following are the xlators loaded in the brick stack:
> > >>
> > >> storage/posix
> > >> features/trash
> > >> features/changetimerecorder
> > >> features/changelog
> > >> features/bitrot-stub
> > >> features/access-control
> > >> features/locks
> > >> features/worm
> > >> features/read-only
> > >> features/leases
> > >> features/upcall
> > >> performance/io-threads
> > >> features/selinux
> > >> features/marker
> > >> features/barrier
> > >> features/index
> > >> features/quota
> > >> debug/io-stats
> > >> performance/decompounder
> > >> protocol/server
> > >>
> > >>
> > >> For those not following this thread, the question we need to answer
> > >> is, "whether the xlator you are associated with works fine if a
> > >> non-lookup fop (like open, setattr, stat etc) hits it without a
> > >> lookup ever being done on that inode"
> > >>
> > >> regards,
> > >> Raghavendra
> > >>
> > >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> > >> rgowd...@redhat.com> wrote:
> > >>
> > >>> Thanks Pranith and Ashish for your inputs.
> > >>>
> > >>> - Original Message -

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-28 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra G" <raghavendra...@gmail.com>
> To: "Nithya Balachandran" <nbala...@redhat.com>
> Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, anoo...@redhat.com, 
> "Gluster Devel" <gluster-devel@gluster.org>,
> raghaven...@redhat.com
> Sent: Tuesday, August 29, 2017 8:52:28 AM
> Subject: Re: [Gluster-devel] Need inputs on patch #17985
> 
> On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran <nbala...@redhat.com>
> wrote:
> 
> > It has been a while, but IIRC snapview-client (loaded above dht/tier
> > etc.) had some issues when we ran tiering tests. Rafi might have more
> > info on this - basically, it was expecting to find the inode_ctx
> > populated but it was not.
> >
> 
> Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
> ownership of,
> 
> * Identifying whether the patch in question causes the issue?

gf_svc_readdirp_cbk is setting the relevant state in the inode [1]. I
quickly checked whether it's the same state stored by gf_svc_lookup_cbk,
and it looks like the same state. So, I guess readdirp is handled correctly
by snapview-client and an explicit lookup is not required. But I will wait
for inputs from rabhat and rafi.

[1] 
https://github.com/gluster/glusterfs/blob/master/xlators/features/snapview-client/src/snapview-client.c#L1962
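
For anyone who wants to double-check that comparison (a sketch; it assumes
a glusterfs source checkout), the two callbacks can be inspected side by
side:

    git clone https://github.com/gluster/glusterfs.git
    cd glusterfs
    grep -n -A 15 "gf_svc_lookup_cbk\|gf_svc_readdirp_cbk" \
        xlators/features/snapview-client/src/snapview-client.c

Both callbacks should be seen storing the same real-vs-virtual inode state
in the inode context, which is what would make an extra lookup redundant
here.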

> * Send a fix or at least evaluate whether a fix is possible.
> 
> @Others,
> 
> With the motivation of getting some traction on this, is it OK if we:
> * Set a deadline of around 15 days to complete the review (or testing with
> the patch in question) of respective components and to come up with issues
> (if any).
> * Post the deadline, if there are no open issues, go ahead and merge the
> patch?
> 
> If time is not enough, let us know and we can come up with a reasonable
> time.
> 
> regards,
> Raghavendra
> 
> 
> > On 24 August 2017 at 10:13, Raghavendra G <raghavendra...@gmail.com>
> > wrote:
> >
> >> Note that we need to consider xlators on the brick stack too. I've
> >> added maintainers/peers of xlators on the brick stack. Please
> >> explicitly ack/nack whether this patch affects your component.
> >>
> >> For reference, the following are the xlators loaded in the brick stack:
> >>
> >> storage/posix
> >> features/trash
> >> features/changetimerecorder
> >> features/changelog
> >> features/bitrot-stub
> >> features/access-control
> >> features/locks
> >> features/worm
> >> features/read-only
> >> features/leases
> >> features/upcall
> >> performance/io-threads
> >> features/selinux
> >> features/marker
> >> features/barrier
> >> features/index
> >> features/quota
> >> debug/io-stats
> >> performance/decompounder
> >> protocol/server
> >>
> >>
> >> For those not following this thread, the question we need to answer is,
> >> "whether the xlator you are associated with works fine if a non-lookup
> >> fop (like open, setattr, stat etc) hits it without a lookup ever being
> >> done on that inode"
> >>
> >> regards,
> >> Raghavendra
> >>
> >> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> >> rgowd...@redhat.com> wrote:
> >>
> >>> Thanks Pranith and Ashish for your inputs.
> >>>
> >>> - Original Message -
> >>> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> >>> > To: "Ashish Pandey" <aspan...@redhat.com>
> >>> > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Xavier Hernandez" <
> >>> xhernan...@datalab.es>, "Gluster Devel"
> >>> > <gluster-devel@gluster.org>
> >>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> >>> > Subject: Re: Need inputs on patch #17985
> >>> >
> >>> > Raghavendra,
> >>> > As Ashish mentioned, there aren't any known problems if upper
> >>> xlators
> >>> > don't send lookups in EC at the moment.
> >>> >
> >>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey <aspan...@redhat.com>
> >>> wrote:
> >>> >
> >>> > > Raghvendra,
> >>> > >
> >>> > > I have provided my comment on this patch.
> >>> > > I think EC will not have any issue with this approach.
> >>> > 

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-28 Thread Raghavendra G
On Thu, Aug 24, 2017 at 2:53 PM, Nithya Balachandran 
wrote:

> It has been a while, but IIRC snapview-client (loaded above dht/tier etc.)
> had some issues when we ran tiering tests. Rafi might have more info on
> this - basically, it was expecting to find the inode_ctx populated but it
> was not.
>

Thanks Nithya. @Rafi, @Raghavendra Bhat, is it possible to take the
ownership of,

* Identifying whether the patch in question causes the issue?
* Send a fix or at least evaluate whether a fix is possible.

@Others,

With the motivation of getting some traction on this, is it OK if we:
* Set a deadline of around 15 days to complete the review (or testing with
the patch in question) of respective components and to come up with issues
(if any).
* Post the deadline, if there are no open issues, go ahead and merge the
patch?

If time is not enough, let us know and we can come up with a reasonable
time.

regards,
Raghavendra


> On 24 August 2017 at 10:13, Raghavendra G 
> wrote:
>
>> Note that we need to consider xlators on the brick stack too. I've added
>> maintainers/peers of xlators on the brick stack. Please explicitly
>> ack/nack whether this patch affects your component.
>>
>> For reference, the following are the xlators loaded in the brick stack:
>>
>> storage/posix
>> features/trash
>> features/changetimerecorder
>> features/changelog
>> features/bitrot-stub
>> features/access-control
>> features/locks
>> features/worm
>> features/read-only
>> features/leases
>> features/upcall
>> performance/io-threads
>> features/selinux
>> features/marker
>> features/barrier
>> features/index
>> features/quota
>> debug/io-stats
>> performance/decompounder
>> protocol/server
>>
>>
>> For those not following this thread, the question we need to answer is,
>> "whether the xlator you are associated with works fine if a non-lookup
>> fop (like open, setattr, stat etc) hits it without a lookup ever being done
>> on that inode"
>>
>> regards,
>> Raghavendra
>>
>> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> Thanks Pranith and Ashish for your inputs.
>>>
>>> - Original Message -
>>> > From: "Pranith Kumar Karampuri" 
>>> > To: "Ashish Pandey" 
>>> > Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" <
>>> xhernan...@datalab.es>, "Gluster Devel"
>>> > 
>>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
>>> > Subject: Re: Need inputs on patch #17985
>>> >
>>> > Raghavendra,
>>> > As Ashish mentioned, there aren't any known problems if upper
>>> xlators
>>> > don't send lookups in EC at the moment.
>>> >
>>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey 
>>> wrote:
>>> >
>>> > > Raghvendra,
>>> > >
>>> > > I have provided my comment on this patch.
>>> > > I think EC will not have any issue with this approach.
>>> > > However, I would welcome comments from Xavi and Pranith too for any
>>> side
>>> > > effects which I may not be able to foresee.
>>> > >
>>> > > Ashish
>>> > >
>>> > > --
>>> > > *From: *"Raghavendra Gowdappa" 
>>> > > *To: *"Ashish Pandey" 
>>> > > *Cc: *"Pranith Kumar Karampuri" , "Xavier
>>> Hernandez"
>>> > > , "Gluster Devel" 
>>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
>>> > > *Subject: *Need inputs on patch #17985
>>> > >
>>> > >
>>> > > Hi Ashish,
>>> > >
>>> > > Following are the blockers for making a decision on whether patch
>>> > > [1] can be merged or not:
>>> > > * Evaluation of dentry operations (like rename etc) in dht
>>> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
>>> > > chmod etc) hits EC without a single lookup performed on file/inode
>>> > >
>>> > > Can you please comment on the patch? I'll take care of dht part.
>>> > >
>>> > > [1] https://review.gluster.org/#/c/17985/
>>> > >
>>> > > regards,
>>> > > Raghavendra
>>> > >
>>> > >
>>> >
>>> >
>>> > --
>>> > Pranith
>>> >
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> --
>>> Raghavendra G
>>>
>>> 
>>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>


-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-24 Thread Nithya Balachandran
It has been a while, but IIRC snapview-client (loaded above dht/tier etc.)
had some issues when we ran tiering tests. Rafi might have more info on
this - basically, it was expecting to find the inode_ctx populated but it
was not.

On 24 August 2017 at 10:13, Raghavendra G  wrote:

> Note that we need to consider xlators on the brick stack too. I've added
> maintainers/peers of xlators on the brick stack. Please explicitly
> ack/nack whether this patch affects your component.
>
> For reference, the following are the xlators loaded in the brick stack:
>
> storage/posix
> features/trash
> features/changetimerecorder
> features/changelog
> features/bitrot-stub
> features/access-control
> features/locks
> features/worm
> features/read-only
> features/leases
> features/upcall
> performance/io-threads
> features/selinux
> features/marker
> features/barrier
> features/index
> features/quota
> debug/io-stats
> performance/decompounder
> protocol/server
>
>
> For those not following this thread, the question we need to answer is,
> "whether the xlator you are associated with works fine if a non-lookup fop
> (like open, setattr, stat etc) hits it without a lookup ever being done on
> that inode"
>
> regards,
> Raghavendra
>
> On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
>
>> Thanks Pranith and Ashish for your inputs.
>>
>> - Original Message -
>> > From: "Pranith Kumar Karampuri" 
>> > To: "Ashish Pandey" 
>> > Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" <
>> xhernan...@datalab.es>, "Gluster Devel"
>> > 
>> > Sent: Wednesday, August 23, 2017 11:55:19 AM
>> > Subject: Re: Need inputs on patch #17985
>> >
>> > Raghavendra,
>> > As Ashish mentioned, there aren't any known problems if upper
>> xlators
>> > don't send lookups in EC at the moment.
>> >
>> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey 
>> wrote:
>> >
>> > > Raghvendra,
>> > >
>> > > I have provided my comment on this patch.
>> > > I think EC will not have any issue with this approach.
>> > > However, I would welcome comments from Xavi and Pranith too for any
>> side
>> > > effects which I may not be able to foresee.
>> > >
>> > > Ashish
>> > >
>> > > --
>> > > *From: *"Raghavendra Gowdappa" 
>> > > *To: *"Ashish Pandey" 
>> > > *Cc: *"Pranith Kumar Karampuri" , "Xavier
>> Hernandez"
>> > > , "Gluster Devel" 
>> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
>> > > *Subject: *Need inputs on patch #17985
>> > >
>> > >
>> > > Hi Ashish,
>> > >
>> > > Following are the blockers for making a decision on whether patch
>> > > [1] can be merged or not:
>> > > * Evaluation of dentry operations (like rename etc) in dht
>> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
>> > > chmod etc) hits EC without a single lookup performed on file/inode
>> > >
>> > > Can you please comment on the patch? I'll take care of dht part.
>> > >
>> > > [1] https://review.gluster.org/#/c/17985/
>> > >
>> > > regards,
>> > > Raghavendra
>> > >
>> > >
>> >
>> >
>> > --
>> > Pranith
>> >
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> --
>> Raghavendra G
>>
>> 
>>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-23 Thread Raghavendra G
Note that we need to consider xlators on the brick stack too. I've added
maintainers/peers of xlators on the brick stack. Please explicitly ack/nack
whether this patch affects your component.

For reference, the following are the xlators loaded in the brick stack:

storage/posix
features/trash
features/changetimerecorder
features/changelog
features/bitrot-stub
features/access-control
features/locks
features/worm
features/read-only
features/leases
features/upcall
performance/io-threads
features/selinux
features/marker
features/barrier
features/index
features/quota
debug/io-stats
performance/decompounder
protocol/server


For those not following this thread, the question we need to answer is,
"whether the xlator you are associated with works fine if a non-lookup fop
(like open, setattr, stat etc) hits it without a lookup ever being done on
that inode"
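
Put concretely, the scenario to check looks like this (a sketch; the volume
name is a placeholder):

    gluster volume profile testvol start
    ls -l /mnt/testvol/dir          # entries arrive via readdirplus
    stat /mnt/testvol/dir/file1     # then a non-lookup fop straight away
    gluster volume profile testvol info  # STAT count rises; with the patch
                                         # there may be no matching LOOKUP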

regards,
Raghavendra

On Wed, Aug 23, 2017 at 11:56 AM, Raghavendra Gowdappa 
wrote:

> Thanks Pranith and Ashish for your inputs.
>
> - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Ashish Pandey" 
> > Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" <
> xhernan...@datalab.es>, "Gluster Devel"
> > 
> > Sent: Wednesday, August 23, 2017 11:55:19 AM
> > Subject: Re: Need inputs on patch #17985
> >
> > Raghavendra,
> > As Ashish mentioned, there aren't any known problems if upper xlators
> > don't send lookups in EC at the moment.
> >
> > On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey 
> wrote:
> >
> > > Raghvendra,
> > >
> > > I have provided my comment on this patch.
> > > I think EC will not have any issue with this approach.
> > > However, I would welcome comments from Xavi and Pranith too for any
> side
> > > effects which I may not be able to foresee.
> > >
> > > Ashish
> > >
> > > --
> > > *From: *"Raghavendra Gowdappa" 
> > > *To: *"Ashish Pandey" 
> > > *Cc: *"Pranith Kumar Karampuri" , "Xavier
> Hernandez"
> > > , "Gluster Devel" 
> > > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> > > *Subject: *Need inputs on patch #17985
> > >
> > >
> > > Hi Ashish,
> > >
> > > Following are the blockers for making a decision on whether patch [1]
> > > can be merged or not:
> > > * Evaluation of dentry operations (like rename etc) in dht
> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
> > > chmod etc) hits EC without a single lookup performed on file/inode
> > >
> > > Can you please comment on the patch? I'll take care of dht part.
> > >
> > > [1] https://review.gluster.org/#/c/17985/
> > >
> > > regards,
> > > Raghavendra
> > >
> > >
> >
> >
> > --
> > Pranith
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
> --
> Raghavendra G
>
> 
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-23 Thread Raghavendra Gowdappa
Thanks Pranith and Ashish for your inputs.

- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Ashish Pandey" 
> Cc: "Raghavendra Gowdappa" , "Xavier Hernandez" 
> , "Gluster Devel"
> 
> Sent: Wednesday, August 23, 2017 11:55:19 AM
> Subject: Re: Need inputs on patch #17985
> 
> Raghavendra,
> As Ashish mentioned, there aren't any known problems if upper xlators
> don't send lookups in EC at the moment.
> 
> On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey  wrote:
> 
> > Raghvendra,
> >
> > I have provided my comment on this patch.
> > I think EC will not have any issue with this approach.
> > However, I would welcome comments from Xavi and Pranith too for any side
> > effects which I may not be able to foresee.
> >
> > Ashish
> >
> > --
> > *From: *"Raghavendra Gowdappa" 
> > *To: *"Ashish Pandey" 
> > *Cc: *"Pranith Kumar Karampuri" , "Xavier Hernandez"
> > , "Gluster Devel" 
> > *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> > *Subject: *Need inputs on patch #17985
> >
> >
> > Hi Ashish,
> >
> > Following are the blockers for making a decision on whether patch [1] can
> > be merged or not:
> > * Evaluation of dentry operations (like rename etc) in dht
> > * Whether EC works fine if a non-lookup fop (like open(dir), stat, chmod
> > etc) hits EC without a single lookup performed on file/inode
> >
> > Can you please comment on the patch? I'll take care of dht part.
> >
> > [1] https://review.gluster.org/#/c/17985/
> >
> > regards,
> > Raghavendra
> >
> >
> 
> 
> --
> Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Need inputs on patch #17985

2017-08-23 Thread Pranith Kumar Karampuri
Raghavendra,
As Ashish mentioned, there aren't any known problems if upper xlators
don't send lookups in EC at the moment.

On Wed, Aug 23, 2017 at 9:07 AM, Ashish Pandey  wrote:

> Raghvendra,
>
> I have provided my comment on this patch.
> I think EC will not have any issue with this approach.
> However, I would welcome comments from Xavi and Pranith too for any side
> effects which I may not be able to foresee.
>
> Ashish
>
> --
> *From: *"Raghavendra Gowdappa" 
> *To: *"Ashish Pandey" 
> *Cc: *"Pranith Kumar Karampuri" , "Xavier Hernandez"
> , "Gluster Devel" 
> *Sent: *Wednesday, August 23, 2017 8:29:48 AM
> *Subject: *Need inputs on patch #17985
>
>
> Hi Ashish,
>
> Following are the blockers for making a decision on whether patch [1] can
> be merged or not:
> * Evaluation of dentry operations (like rename etc) in dht
> * Whether EC works fine if a non-lookup fop (like open(dir), stat, chmod
> etc) hits EC without a single lookup performed on file/inode
>
> Can you please comment on the patch? I'll take care of dht part.
>
> [1] https://review.gluster.org/#/c/17985/
>
> regards,
> Raghavendra
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-22 Thread Ashish Pandey
Raghvendra, 

I have provided my comment on this patch. 
I think EC will not have any issue with this approach. 
However, I would welcome comments from Xavi and Pranith too for any side 
effects which I may not be able to foresee. 

Ashish 

- Original Message -

From: "Raghavendra Gowdappa"  
To: "Ashish Pandey"  
Cc: "Pranith Kumar Karampuri" , "Xavier Hernandez" 
, "Gluster Devel"  
Sent: Wednesday, August 23, 2017 8:29:48 AM 
Subject: Need inputs on patch #17985 

Hi Ashish, 

Following are the blockers for making a decision on whether patch [1] can be 
merged or not: 
* Evaluation of dentry operations (like rename etc) in dht 
* Whether EC works fine if a non-lookup fop (like open(dir), stat, chmod etc) 
hits EC without a single lookup performed on file/inode 

Can you please comment on the patch? I'll take care of dht part. 

[1] https://review.gluster.org/#/c/17985/ 

regards, 
Raghavendra 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Need inputs on patch #17985

2017-08-22 Thread Raghavendra Gowdappa
Hi Ashish,

Following are the blockers for making a decision on whether patch [1] can be
merged or not:
* Evaluation of dentry operations (like rename etc) in dht
* Whether EC works fine if a non-lookup fop (like open(dir), stat, chmod etc)
hits EC without a single lookup performed on file/inode (a rough test sketch
follows below)
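
A rough way to exercise the EC blocker on a disperse volume (a sketch;
hosts, brick paths, and file names are placeholders):

    gluster volume create ecvol disperse 3 redundancy 1 \
        host{1,2,3}:/bricks/ecvol force
    gluster volume start ecvol
    mount -t glusterfs host1:/ecvol /mnt/ecvol

    ls -l /mnt/ecvol                # entries discovered via readdirplus only
    stat /mnt/ecvol/somefile        # non-lookup fops reaching EC without a
    chmod 644 /mnt/ecvol/somefile   # single LOOKUP on the inode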

Can you please comment on the patch? I'll take care of dht part.

[1] https://review.gluster.org/#/c/17985/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel