Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Raghavendra Gowdappa


- Original Message -
> From: "Xavier Hernandez" 
> To: "Pranith Kumar Karampuri" , "Raghavendra G" 
> 
> Cc: "Gluster Devel" 
> Sent: Wednesday, June 1, 2016 11:57:12 AM
> Subject: Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable 
> afr subvols
> 
> Oops, you are right. For entry operations the current version of the
> parent directory is not checked, just to avoid this problem.
> 
> This means that mkdir will be sent to all alive subvolumes. However, it
> still selects the group of answers that has a minimum quorum equal to or
> greater than #bricks - redundancy. So it should still be valid.

What if quorum is met on "bad" subvolumes, and mkdir was successful on those
bad subvolumes? Do we consider the mkdir successful? If yes, even EC suffers
from the problem described in bz https://bugzilla.redhat.com/show_bug.cgi?id=1341429.
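
For a concrete illustration of the concern (hypothetical numbers, assuming a
4+2 dispersed volume):

    #bricks = 6, redundancy = 2
    required group size = #bricks - redundancy = 6 - 2 = 4

    2 "good" bricks down, 4 "bad" bricks alive and agreeing on success
    => matching answers = 4 >= 4, so the answer is accepted

Nothing in the group-size check itself distinguishes good bricks from bad
ones once the good bricks stop answering.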

> 
> Xavi
> 
> On 01/06/16 06:51, Pranith Kumar Karampuri wrote:
> > Xavi,
> > But if we keep winding only to good subvolumes, there is a case
> > where bad subvolumes will never catch up, right? i.e. if we keep creating
> > files in the same directory, then every time self-heal completes there are
> > more entries that mounts would have created on the good subvolumes alone.
> > I think I must have missed this in the reviews if this is the current
> > behavior. It was not in the earlier releases. Right?
> >
> > Pranith
> >
> > On Tue, May 31, 2016 at 2:17 PM, Raghavendra G <raghaven...@gluster.com> wrote:
> >
> >
> >
> > On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez <xhernan...@datalab.es> wrote:
> >
> > Hi,
> >
> > On 31/05/16 07:05, Raghavendra Gowdappa wrote:
> >
> > +gluster-devel, +Xavi
> >
> > Hi all,
> >
> > The context is [1], where bricks do pre-operation checks
> > before doing a fop and proceed with fop only if pre-op check
> > is successful.
> >
> > @Xavi,
> >
> > We need your inputs on behavior of EC subvolumes as well.
> >
> >
> > If I understand correctly, EC shouldn't have any problems here.
> >
> > EC sends the mkdir request to all subvolumes that are currently
> > considered "good" and tries to combine the answers. Answers that
> > match in return code, errno (if necessary) and xdata contents
> > (except for some special xattrs that are ignored for combination
> > purposes), are grouped.
> >
> > Then it takes the group with the most members/answers. If that group
> > has a minimum size of #bricks - redundancy, it is considered the
> > good answer. Otherwise EIO is returned because bricks are in an
> > inconsistent state.
> >
> > If there's any answer in another group, it's considered bad and
> > gets marked so that self-heal will repair it using the good
> > information from the majority of bricks.
> >
> > xdata is combined and returned even if return code is -1.
> >
> > Is that enough to cover the needed behavior ?
> >
> >
> > Thanks Xavi. That's sufficient for the feature in question. One of
> > the main cases I was interested in was what would be the behaviour
> > if mkdir succeeds on "bad" subvolume and fails on "good" subvolume.
> > Since you never wind mkdir to "bad" subvolume(s), this situation
> > never arises.
> >
> >
> >
> >
> > Xavi
> >
> >
> >
> > [1] http://review.gluster.org/13885
> >
> > regards,
> > Raghavendra
> >
> > - Original Message -
> >
> > From: "Pranith Kumar Karampuri"  > >
> > To: "Raghavendra Gowdappa"  > >
> > Cc: "team-quine-afr"  > >, "rhs-zteam"
> > mailto:rhs-zt...@redhat.com>>
> > Sent: Tuesday, May 31, 2016 10:22:49 AM
> > Subject: Re: dht mkdir preop check, afr and
> > (non-)readable afr subvols
> >
> > I think you should start a discussion on gluster-devel
> > so that Xavi gets a
> > chance to respond on the mails as well.
> >
> > On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote:
> >
> > Also note that we've plans to extend this pre-op
> > check to all dentry
> > operations which also depend on the parent layout. So, the
> > discussion needs to
> > cover all dentry operations, like:
> >
> > 1. create
> > 2. mkdir
> > 3. rmdir
> > 4. mknod
> > 5. symlink
> > 6. unlink
> >

Re: [Gluster-devel] Gerrit (review.gluster.org/git.gluster.org) downtime in 15 mins

2016-05-31 Thread Susant Palai
Getting "server error" while trying to login.

Regards,
Susant

- Original Message -
From: "Nigel Babu" 
To: "gluster-devel" , "gluster-infra" 

Sent: Wednesday, 1 June, 2016 8:40:27 AM
Subject: Re: [Gluster-devel] Gerrit (review.gluster.org/git.gluster.org)
downtime in 15 mins




Hello folks, 

The upgrade is now complete! Please let me know if you notice anything wrong. 



On Wed, Jun 1, 2016 at 7:45 AM, Nigel Babu < nig...@redhat.com > wrote: 




Hello, 

I'll be bringing down Gerrit for an upgrade to the latest version. I'll update this
thread when the upgrade is completed. 




-- 


nigelb 



-- 


nigelb 



Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Xavier Hernandez
Oops, you are right. For entry operations the current version of the 
parent directory is not checked, just to avoid this problem.


This means that mkdir will be sent to all alive subvolumes. However, it
still selects the group of answers that has a minimum quorum equal to or
greater than #bricks - redundancy. So it should still be valid.


Xavi

On 01/06/16 06:51, Pranith Kumar Karampuri wrote:

Xavi,
But if we keep winding only to good subvolumes, there is a case
where bad subvolumes will never catch up, right? i.e. if we keep creating
files in the same directory, then every time self-heal completes there are
more entries that mounts would have created on the good subvolumes alone.
I think I must have missed this in the reviews if this is the current
behavior. It was not in the earlier releases. Right?

Pranith

On Tue, May 31, 2016 at 2:17 PM, Raghavendra G <raghaven...@gluster.com> wrote:



On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez <xhernan...@datalab.es> wrote:

Hi,

On 31/05/16 07:05, Raghavendra Gowdappa wrote:

+gluster-devel, +Xavi

Hi all,

The context is [1], where bricks do pre-operation checks
before doing a fop and proceed with fop only if pre-op check
is successful.

@Xavi,

We need your inputs on behavior of EC subvolumes as well.


If I understand correctly, EC shouldn't have any problems here.

EC sends the mkdir request to all subvolumes that are currently
considered "good" and tries to combine the answers. Answers that
match in return code, errno (if necessary) and xdata contents
(except for some special xattrs that are ignored for combination
purposes), are grouped.

Then it takes the group with the most members/answers. If that group
has a minimum size of #bricks - redundancy, it is considered the
good answer. Otherwise EIO is returned because bricks are in an
inconsistent state.

If there's any answer in another group, it's considered bad and
gets marked so that self-heal will repair it using the good
information from the majority of bricks.

xdata is combined and returned even if return code is -1.

Is that enough to cover the needed behavior ?


Thanks Xavi. That's sufficient for the feature in question. One of
the main cases I was interested in was what would be the behaviour
if mkdir succeeds on "bad" subvolume and fails on "good" subvolume.
Since you never wind mkdir to "bad" subvolume(s), this situation
never arises.




Xavi



[1] http://review.gluster.org/13885

regards,
Raghavendra

- Original Message -

From: "Pranith Kumar Karampuri" mailto:pkara...@redhat.com>>
To: "Raghavendra Gowdappa" mailto:rgowd...@redhat.com>>
Cc: "team-quine-afr" mailto:team-quine-...@redhat.com>>, "rhs-zteam"
mailto:rhs-zt...@redhat.com>>
Sent: Tuesday, May 31, 2016 10:22:49 AM
Subject: Re: dht mkdir preop check, afr and
(non-)readable afr subvols

I think you should start a discussion on gluster-devel
so that Xavi gets a
chance to respond on the mails as well.

On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote:

Also note that we've plans to extend this pre-op
check to all dentry
operations which also depend on the parent layout. So, the
discussion needs to
cover all dentry operations, like:

1. create
2. mkdir
3. rmdir
4. mknod
5. symlink
6. unlink
7. rename

We also plan to have similar checks in lock codepath
for directories too
(planning to use hashed-subvolume as lock-subvolume
for directories). So,
more fops :)
8. lk (posix locks)
9. inodelk
10. entrylk

regards,
Raghavendra

- Original Message -

From: "Raghavendra Gowdappa"
mailto:rgowd...@redhat.com>>
To: "team-quine-afr" mailto:team-quine-...@redhat.com>>
Cc: "rhs-zteam" mailto:rhs-zt...@redhat.com>>
Sent: Tuesday, May 31, 2016 10:15:04 AM
Subject: dht mkdir preop check, afr and
(non-)readable afr subvols

Hi all,

  

Re: [Gluster-devel] Smoke is failing for patch 14512

2016-05-31 Thread Raghavendra Gowdappa
+gluster-infra

- Original Message -
> From: "Hari Gowtham" 
> To: "Shyam" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, June 1, 2016 11:11:56 AM
> Subject: Re: [Gluster-devel] Smoke is failing for patch 14512
> 
> Hi,
> 
> I'm seeing the smoke test failures too
> http://review.gluster.org/#/c/14540/3
> 
> 
> - Original Message -
> > From: "Shyam" 
> > To: "Aravinda" , "Gluster Devel"
> > 
> > Sent: Tuesday, May 31, 2016 7:21:38 PM
> > Subject: Re: [Gluster-devel] Smoke is failing for patch 14512
> > 
> > On 05/30/2016 04:02 AM, Aravinda wrote:
> > > Hi,
> > >
> > > Smoke is failing for the patch http://review.gluster.org/#/c/14512
> > >
> > > I am unable to guess the reason for the failure. Please help.
> > > https://build.gluster.org/job/glusterfs-devrpms/16768/console
> > 
> > I got similar failures here,
> > 1) https://build.gluster.org/job/glusterfs-devrpms/16755/console
> > (slave25, as your failure is also on slave25)
> > 
> > What failed was,
> > # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/
> > --releasever 22 install @buildsys-build --setopt=tsflags=nocontexts
> > 
> > 2) https://build.gluster.org/job/glusterfs-devrpms/16762/console (slave24)
> > 
> > What failed here is slightly different,
> > # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/ -y
> > --releasever 22 update --setopt=tsflags=nocontexts
> > 
> > The console output shows, "Results and/or logs in:
> > /home/jenkins/root/workspace/glusterfs-devrpms/RPMS/fc22/x86_64/" and
> > "ERROR: Command failed. See logs for output.", but I guess these logs
> > are no longer available as this is a chroot env that is cleaned up post
> > the task, right?
> > 
> > I am just adding this to the list, as I do not know what the failure is
> > due to.
> > 
> > Any pointers anyone?
> > 
> > >
> > > --
> > > regards
> > > Aravinda
> > >
> > >
> > >
> 
> --
> Regards,
> Hari.
> 
> 


Re: [Gluster-devel] Smoke is failing for patch 14512

2016-05-31 Thread Hari Gowtham
Hi,

I'm seeing the smoke test failures too
http://review.gluster.org/#/c/14540/3


- Original Message -
> From: "Shyam" 
> To: "Aravinda" , "Gluster Devel" 
> 
> Sent: Tuesday, May 31, 2016 7:21:38 PM
> Subject: Re: [Gluster-devel] Smoke is failing for patch 14512
> 
> On 05/30/2016 04:02 AM, Aravinda wrote:
> > Hi,
> >
> > Smoke is failing for the patch http://review.gluster.org/#/c/14512
> >
> > I am unable to guess the reason for the failure. Please help.
> > https://build.gluster.org/job/glusterfs-devrpms/16768/console
> 
> I got similar failures here,
> 1) https://build.gluster.org/job/glusterfs-devrpms/16755/console
> (slave25, as your failure is also on slave25)
> 
> What failed was,
> # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/
> --releasever 22 install @buildsys-build --setopt=tsflags=nocontexts
> 
> 2) https://build.gluster.org/job/glusterfs-devrpms/16762/console (slave24)
> 
> What failed here is slightly different,
> # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/ -y
> --releasever 22 update --setopt=tsflags=nocontexts
> 
> The console output shows, "Results and/or logs in:
> /home/jenkins/root/workspace/glusterfs-devrpms/RPMS/fc22/x86_64/" and
> "ERROR: Command failed. See logs for output.", but I guess these logs
> are no longer available as this is a chroot env that is cleaned up post
> the task, right?
> 
> I am just adding this to the list, as I do not know what the failure is
> due to.
> 
> Any pointers anyone?
> 
> >
> > --
> > regards
> > Aravinda
> >
> >
> >
> 

-- 
Regards, 
Hari. 



[Gluster-devel] review request - quota information mismatch which glusterfs on zfs environment

2016-05-31 Thread Sungsik Park
Hi all,

A 'quota information mismatch' problem occurs when the underlying file
system is ZFS.

I request reviews of the code commit that solves this problem.

* Red Hat Bugzilla - Bug 1341355 - quota information mismatch which
glusterfs on zfs environment


* for review: http://review.gluster.org/#/c/14593/

thanks.

-- 
Sungsik, Park [Corazy Park]
Software Development Engineer
Email: corazy.p...@gmail.com




This email may be confidential and protected by legal privilege. If you are
not the intended recipient, disclosure, copying, distribution and use are
prohibited; please notify us immediately and delete this copy from your system.



Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Pranith Kumar Karampuri
Xavi,
But if we keep winding only to good subvolumes, there is a case
where bad subvolumes will never catch up, right? i.e. if we keep creating
files in the same directory, then every time self-heal completes there are
more entries that mounts would have created on the good subvolumes alone.
I think I must have missed this in the reviews if this is the current
behavior. It was not in the earlier releases. Right?

Pranith

On Tue, May 31, 2016 at 2:17 PM, Raghavendra G 
wrote:

>
>
> On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez 
> wrote:
>
>> Hi,
>>
>> On 31/05/16 07:05, Raghavendra Gowdappa wrote:
>>
>>> +gluster-devel, +Xavi
>>>
>>> Hi all,
>>>
>>> The context is [1], where bricks do pre-operation checks before doing a
>>> fop and proceed with fop only if pre-op check is successful.
>>>
>>> @Xavi,
>>>
>>> We need your inputs on behavior of EC subvolumes as well.
>>>
>>
>> If I understand correctly, EC shouldn't have any problems here.
>>
>> EC sends the mkdir request to all subvolumes that are currently
>> considered "good" and tries to combine the answers. Answers that match in
>> return code, errno (if necessary) and xdata contents (except for some
>> special xattrs that are ignored for combination purposes), are grouped.
>>
>> Then it takes the group with the most members/answers. If that group has a
>> minimum size of #bricks - redundancy, it is considered the good answer.
>> Otherwise EIO is returned because bricks are in an inconsistent state.
>>
>> If there's any answer in another group, it's considered bad and gets
>> marked so that self-heal will repair it using the good information from the
>> majority of bricks.
>>
>> xdata is combined and returned even if return code is -1.
>>
>> Is that enough to cover the needed behavior ?
>>
>
> Thanks Xavi. That's sufficient for the feature in question. One of the
> main cases I was interested in was what would be the behaviour if mkdir
> succeeds on "bad" subvolume and fails on "good" subvolume. Since you never
> wind mkdir to "bad" subvolume(s), this situation never arises.
>
>
>
>>
>> Xavi
>>
>>
>>
>>> [1] http://review.gluster.org/13885
>>>
>>> regards,
>>> Raghavendra
>>>
>>> - Original Message -
>>>
 From: "Pranith Kumar Karampuri" 
 To: "Raghavendra Gowdappa" 
 Cc: "team-quine-afr" , "rhs-zteam" <
 rhs-zt...@redhat.com>
 Sent: Tuesday, May 31, 2016 10:22:49 AM
 Subject: Re: dht mkdir preop check, afr and (non-)readable afr subvols

 I think you should start a discussion on gluster-devel so that Xavi
 gets a
 chance to respond on the mails as well.

 On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <
 rgowd...@redhat.com>
 wrote:

 Also note that we've plans to extend this pre-op check to all dentry
> operations which also depend on the parent layout. So, the discussion needs
> to cover all dentry operations, like:
>
> 1. create
> 2. mkdir
> 3. rmdir
> 4. mknod
> 5. symlink
> 6. unlink
> 7. rename
>
> We also plan to have similar checks in lock codepath for directories
> too
> (planning to use hashed-subvolume as lock-subvolume for directories).
> So,
> more fops :)
> 8. lk (posix locks)
> 9. inodelk
> 10. entrylk
>
> regards,
> Raghavendra
>
> - Original Message -
>
>> From: "Raghavendra Gowdappa" 
>> To: "team-quine-afr" 
>> Cc: "rhs-zteam" 
>> Sent: Tuesday, May 31, 2016 10:15:04 AM
>> Subject: dht mkdir preop check, afr and (non-)readable afr subvols
>>
>> Hi all,
>>
>> I have some queries related to the behavior of afr_mkdir with respect
>> to
>> readable subvols.
>>
>> 1. While winding mkdir to subvols does afr check whether the
>> subvolume is
>> good/readable? Or does it wind to all subvols irrespective of whether
>> a
>> subvol is good/bad? In the latter case, what if
>>a. mkdir succeeds on non-readable subvolume
>>b. fails on readable subvolume
>>
>>   What is the result reported to higher layers in the above scenario?
>> If
>>   mkdir is failed, is it cleaned up on non-readable subvolume where it
>>   failed?
>>
>> I am interested in this case as dht-preop check relies on layout
>> xattrs
>>
> and I
>
>> assume layout xattrs in particular (and all xattrs in general) are
>> guaranteed to be correct only on a readable subvolume of afr. So, in
>>
> essence
>
>> we shouldn't be winding down mkdir on non-readable subvols as whatever
>>
> the
>
>> decision brick makes as part of pre-op check is inherently flawed.
>>
>> regards,
>> Raghavendra
>>
> --
 Pranith

>
>
>
> --
> Raghavendra G
>

Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Raghavendra G
I've filed a bug at [1] to track the issue in afr.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1341429

On Tue, May 31, 2016 at 2:17 PM, Raghavendra G 
wrote:

>
>
> On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez 
> wrote:
>
>> Hi,
>>
>> On 31/05/16 07:05, Raghavendra Gowdappa wrote:
>>
>>> +gluster-devel, +Xavi
>>>
>>> Hi all,
>>>
>>> The context is [1], where bricks do pre-operation checks before doing a
>>> fop and proceed with fop only if pre-op check is successful.
>>>
>>> @Xavi,
>>>
>>> We need your inputs on behavior of EC subvolumes as well.
>>>
>>
>> If I understand correctly, EC shouldn't have any problems here.
>>
>> EC sends the mkdir request to all subvolumes that are currently
>> considered "good" and tries to combine the answers. Answers that match in
>> return code, errno (if necessary) and xdata contents (except for some
>> special xattrs that are ignored for combination purposes), are grouped.
>>
>> Then it takes the group with the most members/answers. If that group has a
>> minimum size of #bricks - redundancy, it is considered the good answer.
>> Otherwise EIO is returned because bricks are in an inconsistent state.
>>
>> If there's any answer in another group, it's considered bad and gets
>> marked so that self-heal will repair it using the good information from the
>> majority of bricks.
>>
>> xdata is combined and returned even if return code is -1.
>>
>> Is that enough to cover the needed behavior ?
>>
>
> Thanks Xavi. That's sufficient for the feature in question. One of the
> main cases I was interested in was what would be the behaviour if mkdir
> succeeds on "bad" subvolume and fails on "good" subvolume. Since you never
> wind mkdir to "bad" subvolume(s), this situation never arises.
>
>
>
>>
>> Xavi
>>
>>
>>
>>> [1] http://review.gluster.org/13885
>>>
>>> regards,
>>> Raghavendra
>>>
>>> - Original Message -
>>>
 From: "Pranith Kumar Karampuri" 
 To: "Raghavendra Gowdappa" 
 Cc: "team-quine-afr" , "rhs-zteam" <
 rhs-zt...@redhat.com>
 Sent: Tuesday, May 31, 2016 10:22:49 AM
 Subject: Re: dht mkdir preop check, afr and (non-)readable afr subvols

 I think you should start a discussion on gluster-devel so that Xavi
 gets a
 chance to respond on the mails as well.

 On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <
 rgowd...@redhat.com>
 wrote:

 Also note that we've plans to extend this pre-op check to all dentry
> operations which also depend on the parent layout. So, the discussion needs
> to cover all dentry operations, like:
>
> 1. create
> 2. mkdir
> 3. rmdir
> 4. mknod
> 5. symlink
> 6. unlink
> 7. rename
>
> We also plan to have similar checks in lock codepath for directories
> too
> (planning to use hashed-subvolume as lock-subvolume for directories).
> So,
> more fops :)
> 8. lk (posix locks)
> 9. inodelk
> 10. entrylk
>
> regards,
> Raghavendra
>
> - Original Message -
>
>> From: "Raghavendra Gowdappa" 
>> To: "team-quine-afr" 
>> Cc: "rhs-zteam" 
>> Sent: Tuesday, May 31, 2016 10:15:04 AM
>> Subject: dht mkdir preop check, afr and (non-)readable afr subvols
>>
>> Hi all,
>>
>> I have some queries related to the behavior of afr_mkdir with respect
>> to
>> readable subvols.
>>
>> 1. While winding mkdir to subvols does afr check whether the
>> subvolume is
>> good/readable? Or does it wind to all subvols irrespective of whether
>> a
>> subvol is good/bad? In the latter case, what if
>>a. mkdir succeeds on non-readable subvolume
>>b. fails on readable subvolume
>>
>>   What is the result reported to higher layers in the above scenario?
>> If
>>   mkdir is failed, is it cleaned up on non-readable subvolume where it
>>   failed?
>>
>> I am interested in this case as dht-preop check relies on layout
>> xattrs
>>
> and I
>
>> assume layout xattrs in particular (and all xattrs in general) are
>> guaranteed to be correct only on a readable subvolume of afr. So, in
>>
> essence
>
>> we shouldn't be winding down mkdir on non-readable subvols as whatever
>>
> the
>
>> decision brick makes as part of pre-op check is inherently flawed.
>>
>> regards,
>> Raghavendra
>>
> --
 Pranith

>
>
>
> --
> Raghavendra G
>



-- 
Raghavendra G

Re: [Gluster-devel] Gerrit (review.gluster.org/git.gluster.org) downtime in 15 mins

2016-05-31 Thread Nigel Babu
Hello folks,

The upgrade is now complete! Please let me know if you notice anything
wrong.

On Wed, Jun 1, 2016 at 7:45 AM, Nigel Babu  wrote:

> Hello,
>
> I'll be bringing down Gerrit for an upgrade to the latest version. I'll update
> this thread when the upgrade is completed.
>
> --
> nigelb
>



-- 
nigelb

[Gluster-devel] Gerrit (review.gluster.org/git.gluster.org) downtime in 15 mins

2016-05-31 Thread Nigel Babu
Hello,

I'll be bringing down Gerrit for an upgrade to the latest version. I'll update
this thread when the upgrade is completed.

-- 
nigelb

Re: [Gluster-devel] Smoke is failing for patch 14512

2016-05-31 Thread Shyam

On 05/30/2016 04:02 AM, Aravinda wrote:

Hi,

Smoke is failing for the patch http://review.gluster.org/#/c/14512

I am unable to guess the reason for the failure. Please help.
https://build.gluster.org/job/glusterfs-devrpms/16768/console


I got similar failures here,
1) https://build.gluster.org/job/glusterfs-devrpms/16755/console 
(slave25, as your failure is also on slave25)


What failed was,
# /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/ 
--releasever 22 install @buildsys-build --setopt=tsflags=nocontexts


2) https://build.gluster.org/job/glusterfs-devrpms/16762/console (slave24)

What failed here is slightly different,
# /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/ -y 
--releasever 22 update --setopt=tsflags=nocontexts


The console output shows, "Results and/or logs in: 
/home/jenkins/root/workspace/glusterfs-devrpms/RPMS/fc22/x86_64/" and 
"ERROR: Command failed. See logs for output.", but I guess these logs 
are no longer available as this is a chroot env that is cleaned up post 
the task, right?


I am just adding this to the list, as I do not know what the failure is 
due to.


Any pointers anyone?



--
regards
Aravinda





Re: [Gluster-devel] Change in glusterfs[master]: features/worm: updating function names & unwinding FOPs with...

2016-05-31 Thread Karthik Subrahmanya
Hi Jeff,

Thank you for your time and the valuable reviews.
I have addressed the review comments. Can you please have a look?

Thanks & Regards,
Karthik


- Original Message -
> From: "Jeff Darcy (Code Review)" 
> To: "Karthik U S" 
> Cc: "Gluster Build System" , "Niels de Vos" 
> , "NetBSD Build System"
> , "Raghavendra Talur" , 
> "Vijaikumar Mallikarjuna"
> , "Joseph Fernandes" 
> Sent: Friday, May 27, 2016 2:52:28 AM
> Subject: Change in glusterfs[master]: features/worm: updating function names 
> & unwinding FOPs with...
> 
> Jeff Darcy has posted comments on this change.
> 
> Change subject: features/worm: updating function names & unwinding FOPs with
> op_errno
> ..
> 
> 
> Patch Set 4:
> 
> (1 comment)
> 
> http://review.gluster.org/#/c/14222/4/xlators/features/read-only/src/worm.c
> File xlators/features/read-only/src/worm.c:
> 
> Line 83: goto out;
> > Done.
> I think we can - and should - do better.  We don't adhere to a strict "only
> return from one place" policy, precisely because sometimes the contortions
> needed to comply make the code even less readable.  Playing hopscotch among
> several goto labels certainly qualifies, and we can see several clues that
> simplification is still possible:
> 
> (a) Whether we call STACK_WIND or STACK_UNWIND always corresponds to whether
> op_errno is zero or not.  At 60 we unwind with non-zero.  At 63, 67, and 71
> we wind with zero.  At 74 we unwind with non-zero.
> 
> (b) After we've wound or unwound, we return op_errno even though it's always
> zero by then (from 68 or 76).  In other words, we don't actually need
> op_errno by the time we return.
> 
> These "coincidences" suggest that an approach similar to that used in other
> translators will work.
> 
> int op_errno = 0;
> 
> /* Example: error or failure. */
> if (is_readonly...) {
>   op_errno = EROFS;
>   goto out;
> }
> 
> /* Example: optimization or easy case. */
> if (is_wormfile...) {
>   goto out;
> }
> 
> /* Example: result from another function. */
> op_errno = gf_worm_state_transition...;
> 
>   out:
> 
> /* Common cleanup actions could go here... */
> 
> if (op_errno) {
>   STACK_UNWIND (..., -1, op_errno, ...);
> } else {
>   STACK_WIND (...);
> }
> 
> /* ...or here. */
> 
> return 0;
> 
> Sometimes this is flipped around, with ret/op_errno/whatever initially set to
> an error value and only set to zero when we're sure of success.  Which to
> use is mostly a matter of whether success or failure paths are more common.
> In any case, this makes our state explicit in op_errno.  It's easier to
> verify/ensure that we always wind on success and unwind (with a non-zero
> op_errno) on failure, and that we return zero either way.  We've had many
> bugs in other translators that were the result of "escaping" from a fop
> function with neither a wind nor an unwind, and those tend to be hard to
> debug.  Making it hard for such mistakes to creep in when another engineer
> modifies this code a year from now is very valuable.  Also, before anyone
> else assumes otherwise, we don't have Coverity or clang or any other kind of
> rules to detect those particular things automatically.
> 
> I know it's a pain, and it's late in the game, but this seems to be a
> technical-debt-reduction patch already (as opposed to a true bug fix) so
> let's reduce as much as we can at once instead of having to review and
> regression-test the same code twice.  BTW, the same pattern recurs in
> setattr/setfattr, and there's a typo (perfix/prefix) in the commit message.
> 
> 
> --
> To view, visit http://review.gluster.org/14222
> To unsubscribe, visit http://review.gluster.org/settings
> 
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I3a2f114061aae4b422df54e91c4b3f702af5d0b0
> Gerrit-PatchSet: 4
> Gerrit-Project: glusterfs
> Gerrit-Branch: master
> Gerrit-Owner: Karthik U S 
> Gerrit-Reviewer: Gluster Build System 
> Gerrit-Reviewer: Jeff Darcy 
> Gerrit-Reviewer: Joseph Fernandes
> Gerrit-Reviewer: Joseph Fernandes 
> Gerrit-Reviewer: Karthik U S 
> Gerrit-Reviewer: NetBSD Build System 
> Gerrit-Reviewer: Niels de Vos 
> Gerrit-Reviewer: Raghavendra Talur 
> Gerrit-Reviewer: Vijaikumar Mallikarjuna 
> Gerrit-HasComments: Yes
> 


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-05-31 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin


Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Raghavendra G
On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez 
wrote:

> Hi,
>
> On 31/05/16 07:05, Raghavendra Gowdappa wrote:
>
>> +gluster-devel, +Xavi
>>
>> Hi all,
>>
>> The context is [1], where bricks do pre-operation checks before doing a
>> fop and proceed with fop only if pre-op check is successful.
>>
>> @Xavi,
>>
>> We need your inputs on behavior of EC subvolumes as well.
>>
>
> If I understand correctly, EC shouldn't have any problems here.
>
> EC sends the mkdir request to all subvolumes that are currently considered
> "good" and tries to combine the answers. Answers that match in return code,
> errno (if necessary) and xdata contents (except for some special xattrs
> that are ignored for combination purposes), are grouped.
>
> Then it takes the group with the most members/answers. If that group has a
> minimum size of #bricks - redundancy, it is considered the good answer.
> Otherwise EIO is returned because bricks are in an inconsistent state.
>
> If there's any answer in another group, it's considered bad and gets
> marked so that self-heal will repair it using the good information from the
> majority of bricks.
>
> xdata is combined and returned even if return code is -1.
>
> Is that enough to cover the needed behavior ?
>

Thanks Xavi. That's sufficient for the feature in question. One of the main
cases I was interested in was what would be the behaviour if mkdir succeeds
on "bad" subvolume and fails on "good" subvolume. Since you never wind
mkdir to "bad" subvolume(s), this situation never arises.



>
> Xavi
>
>
>
>> [1] http://review.gluster.org/13885
>>
>> regards,
>> Raghavendra
>>
>> - Original Message -
>>
>>> From: "Pranith Kumar Karampuri" 
>>> To: "Raghavendra Gowdappa" 
>>> Cc: "team-quine-afr" , "rhs-zteam" <
>>> rhs-zt...@redhat.com>
>>> Sent: Tuesday, May 31, 2016 10:22:49 AM
>>> Subject: Re: dht mkdir preop check, afr and (non-)readable afr subvols
>>>
>>> I think you should start a discussion on gluster-devel so that Xavi gets
>>> a
>>> chance to respond on the mails as well.
>>>
>>> On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <
>>> rgowd...@redhat.com>
>>> wrote:
>>>
>>> Also note that we've plans to extend this pre-op check to all dentry
 operations which also depend on the parent layout. So, the discussion needs
 to cover all dentry operations, like:

 1. create
 2. mkdir
 3. rmdir
 4. mknod
 5. symlink
 6. unlink
 7. rename

 We also plan to have similar checks in lock codepath for directories too
 (planning to use hashed-subvolume as lock-subvolume for directories).
 So,
 more fops :)
 8. lk (posix locks)
 9. inodelk
 10. entrylk

 regards,
 Raghavendra

 - Original Message -

> From: "Raghavendra Gowdappa" 
> To: "team-quine-afr" 
> Cc: "rhs-zteam" 
> Sent: Tuesday, May 31, 2016 10:15:04 AM
> Subject: dht mkdir preop check, afr and (non-)readable afr subvols
>
> Hi all,
>
> I have some queries related to the behavior of afr_mkdir with respect
> to
> readable subvols.
>
> 1. While winding mkdir to subvols does afr check whether the subvolume
> is
> good/readable? Or does it wind to all subvols irrespective of whether a
> subvol is good/bad? In the latter case, what if
>a. mkdir succeeds on non-readable subvolume
>b. fails on readable subvolume
>
>   What is the result reported to higher layers in the above scenario?
> If
>   mkdir is failed, is it cleaned up on non-readable subvolume where it
>   failed?
>
> I am interested in this case as dht-preop check relies on layout xattrs
>
 and I

> assume layout xattrs in particular (and all xattrs in general) are
> guaranteed to be correct only on a readable subvolume of afr. So, in
>
 essence

> we shouldn't be winding down mkdir on non-readable subvols as whatever
>
 the

> decision brick makes as part of pre-op check is inherently flawed.
>
> regards,
> Raghavendra
>
 --
>>> Pranith
>>>
>



-- 
Raghavendra G

Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Pranith Kumar Karampuri
Just checked the ec code. Looks okay. All entry fops also update the metadata
and data parts of the xattr.

On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez 
wrote:

> Hi,
>
> On 31/05/16 07:05, Raghavendra Gowdappa wrote:
>
>> +gluster-devel, +Xavi
>>
>> Hi all,
>>
>> The context is [1], where bricks do pre-operation checks before doing a
>> fop and proceed with fop only if pre-op check is successful.
>>
>> @Xavi,
>>
>> We need your inputs on behavior of EC subvolumes as well.
>>
>
> If I understand correctly, EC shouldn't have any problems here.
>
> EC sends the mkdir request to all subvolumes that are currently considered
> "good" and tries to combine the answers. Answers that match in return code,
> errno (if necessary) and xdata contents (except for some special xattrs
> that are ignored for combination purposes), are grouped.
>
> Then it takes the group with the most members/answers. If that group has a
> minimum size of #bricks - redundancy, it is considered the good answer.
> Otherwise EIO is returned because bricks are in an inconsistent state.
>
> If there's any answer in another group, it's considered bad and gets
> marked so that self-heal will repair it using the good information from the
> majority of bricks.
>
> xdata is combined and returned even if return code is -1.
>
> Is that enough to cover the needed behavior ?
>
> Xavi
>
>
>
>> [1] http://review.gluster.org/13885
>>
>> regards,
>> Raghavendra
>>
>> - Original Message -
>>
>>> From: "Pranith Kumar Karampuri" 
>>> To: "Raghavendra Gowdappa" 
>>> Cc: "team-quine-afr" , "rhs-zteam" <
>>> rhs-zt...@redhat.com>
>>> Sent: Tuesday, May 31, 2016 10:22:49 AM
>>> Subject: Re: dht mkdir preop check, afr and (non-)readable afr subvols
>>>
>>> I think you should start a discussion on gluster-devel so that Xavi gets
>>> a
>>> chance to respond on the mails as well.
>>>
>>> On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa <
>>> rgowd...@redhat.com>
>>> wrote:
>>>
>>> Also note that we've plans to extend this pre-op check to all dentry
 operations which also depend on the parent layout. So, the discussion needs
 to cover all dentry operations, like:

 1. create
 2. mkdir
 3. rmdir
 4. mknod
 5. symlink
 6. unlink
 7. rename

 We also plan to have similar checks in lock codepath for directories too
 (planning to use hashed-subvolume as lock-subvolume for directories).
 So,
 more fops :)
 8. lk (posix locks)
 9. inodelk
 10. entrylk

 regards,
 Raghavendra

 - Original Message -

> From: "Raghavendra Gowdappa" 
> To: "team-quine-afr" 
> Cc: "rhs-zteam" 
> Sent: Tuesday, May 31, 2016 10:15:04 AM
> Subject: dht mkdir preop check, afr and (non-)readable afr subvols
>
> Hi all,
>
> I have some queries related to the behavior of afr_mkdir with respect
> to
> readable subvols.
>
> 1. While winding mkdir to subvols does afr check whether the subvolume
> is
> good/readable? Or does it wind to all subvols irrespective of whether a
> subvol is good/bad? In the latter case, what if
>a. mkdir succeeds on non-readable subvolume
>b. fails on readable subvolume
>
>   What is the result reported to higher layers in the above scenario?
> If
>   mkdir is failed, is it cleaned up on non-readable subvolume where it
>   failed?
>
> I am interested in this case as dht-preop check relies on layout xattrs
>
 and I

> assume layout xattrs in particular (and all xattrs in general) are
> guaranteed to be correct only on a readable subvolume of afr. So, in
>
 essence

> we shouldn't be winding down mkdir on non-readable subvols as whatever
>
 the

> decision brick makes as part of pre-op check is inherently flawed.
>
> regards,
> Raghavendra
>
 --
>>> Pranith
>>>
>>>


-- 
Pranith

Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Prasanna Kalever
Hi Kotresh,

This is where I was looking:
http://review.nigelb.me/#/c/14346/1/xlators/features/index/src/index.c

Maybe this patch could have been posted before the upgrade?


Thanks,
--
Prasanna


On Tue, May 31, 2016 at 12:39 PM, Kotresh Hiremath Ravishankar wrote:
> Hi Prasanna,
>
> The 'Fix' button is visible. Maybe you are missing something; please check.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
>> From: "Prasanna Kalever" 
>> To: "Nigel Babu" 
>> Cc: "gluster-infra" , "gluster-devel" 
>> 
>> Sent: Tuesday, May 31, 2016 12:13:47 PM
>> Subject: Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2
>>
>> Hi Nigel,
>>
>> I don't see the 'Fix' button in the comment section, which is the "fix
>> for a remote code execution exploit" introduced in 2.12.2. It helps us
>> in editing the code in the gerrit web editor instantaneously, so we
>> don't have to cherry-pick the patch every time to address minor code
>> changes.
>>
>> I feel that is really helpful for the developers to address comments
>> faster and easier.
>>
>> Please see [1], it also has attachments showing how this looks
>>
>> [1] http://www.gluster.org/pipermail/gluster-devel/2016-May/049429.html
>>
>>
>> Thanks,
>> --
>> Prasanna
>>
>> On Tue, May 31, 2016 at 10:39 AM, Nigel Babu  wrote:
>> > Hello,
>> >
>> > A reminder: I'm hoping to get this done tomorrow morning at 0230 GMT[1].
>> > I'll have a backup ready in case something goes wrong. I've tested this
>> > process on review.nigelb.me and it's gone reasonably smoothly.
>> >
>> > [1]:
>> > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Maintenance&iso=20160601T08&p1=176&ah=1
>> >
>> > On Mon, May 30, 2016 at 7:26 PM, Nigel Babu  wrote:
>> >>
>> >> Hello,
>> >>
>> >> I've now upgraded Gerrit on http://review.nigelb.me to 2.12.2. Please
>> >> spend a few minutes testing that everything works as you expect it to. If
>> >> I
>> >> don't hear anything negative by tomorrow, I'd like to schedule an upgrade
>> >> this week.
>> >>
>> >> --
>> >> nigelb
>> >
>> >
>> >
>> >
>> > --
>> > nigelb
>> >


Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Nigel Babu
Hello,

I'm guessing there's a particular combination of permissions you need to see
the Fix button. I don't see it myself, possibly because I have no changes of
my own, nor has my user been marked as a reviewer on any commit.

The security issue and the "Fix" button are unrelated as far as I know. The
security issue was fixed in apache-commons collections rather than Gerrit
code.

Thank you for testing! :)

On Tue, May 31, 2016 at 12:42 PM, Anoop C S  wrote:

> On Tue, 2016-05-31 at 03:09 -0400, Kotresh Hiremath Ravishankar wrote:
> > Hi Prasanna,
> >
> > 'Fix' button is visible. May be you are missing something, please
> > check.
> >
>
> +1
>
> > Thanks and Regards,
> > Kotresh H R
> >
> > - Original Message -
> > >
> > > From: "Prasanna Kalever" 
> > > To: "Nigel Babu" 
> > > Cc: "gluster-infra" , "gluster-devel"  > > luster-de...@gluster.org>
> > > Sent: Tuesday, May 31, 2016 12:13:47 PM
> > > Subject: Re: [Gluster-devel] [Gluster-infra] Please test Gerrit
> > > 2.12.2
> > >
> > > Hi Nigel,
> > >
> > > I don't see the 'Fix' button in the comment section, which is the "fix
> > > for a remote code execution exploit" introduced in 2.12.2. It helps us
> > > in editing the code in the gerrit web editor instantaneously, so we
> > > don't have to cherry-pick the patch every time to address minor code
> > > changes.
> > >
> > > I feel that is really helpful for the developers to address
> > > comments
> > > faster and easier.
> > >
> > > Please see [1], it also has attachments showing how this looks
> > >
> > > [1] http://www.gluster.org/pipermail/gluster-devel/2016-May/049429.html
> > >
> > >
> > > Thanks,
> > > --
> > > Prasanna
> > >
> > > On Tue, May 31, 2016 at 10:39 AM, Nigel Babu 
> > > wrote:
> > > >
> > > > Hello,
> > > >
> > > > A reminder: I'm hoping to get this done tomorrow morning at 0230
> > > > GMT[1].
> > > > I'll have a backup ready in case something goes wrong. I've
> > > > tested this
> > > > process on review.nigelb.me and it's gone reasonably smoothly.
> > > >
> > > > [1]:
> > > > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Maintenance&iso=20160601T08&p1=176&ah=1
> > > >
> > > > On Mon, May 30, 2016 at 7:26 PM, Nigel Babu 
> > > > wrote:
> > > > >
> > > > >
> > > > > Hello,
> > > > >
> > > > > I've now upgraded Gerrit on http://review.nigelb.me to 2.12.2.
> > > > > Please
> > > > > spend a few minutes testing that everything works as you expect
> > > > > it to. If
> > > > > I
> > > > > don't hear anything negative by tomorrow, I'd like to schedule
> > > > > an upgrade
> > > > > this week.
> > > > >
> > > > > --
> > > > > nigelb
> > > >
> > > >
> > > >
> > > > --
> > > > nigelb
> > > >
>



-- 
nigelb

Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Anoop C S
On Tue, 2016-05-31 at 03:09 -0400, Kotresh Hiremath Ravishankar wrote:
> Hi Prasanna,
> 
> The 'Fix' button is visible. Maybe you are missing something; please
> check.
> 

+1

> Thanks and Regards,
> Kotresh H R
> 
> - Original Message -
> > 
> > From: "Prasanna Kalever" 
> > To: "Nigel Babu" 
> > Cc: "gluster-infra" , "gluster-devel"  > luster-de...@gluster.org>
> > Sent: Tuesday, May 31, 2016 12:13:47 PM
> > Subject: Re: [Gluster-devel] [Gluster-infra] Please test Gerrit
> > 2.12.2
> > 
> > Hi Nigel,
> > 
> > I don't see the 'Fix' button in the comment section, which is the "fix
> > for a remote code execution exploit" introduced in 2.12.2. It helps us
> > in editing the code in the gerrit web editor instantaneously, so we
> > don't have to cherry-pick the patch every time to address minor code
> > changes.
> > 
> > I feel that is really helpful for the developers to address
> > comments
> > faster and easier.
> > 
> > Please see [1], it also has attachments showing how this looks
> > 
> > [1] http://www.gluster.org/pipermail/gluster-devel/2016-May/049429.html
> > 
> > 
> > Thanks,
> > --
> > Prasanna
> > 
> > On Tue, May 31, 2016 at 10:39 AM, Nigel Babu 
> > wrote:
> > > 
> > > Hello,
> > > 
> > > A reminder: I'm hoping to get this done tomorrow morning at 0230
> > > GMT[1].
> > > I'll have a backup ready in case something goes wrong. I've
> > > tested this
> > > process on review.nigelb.me and it's gone reasonably smoothly.
> > > 
> > > [1]:
> > > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Maintenance&iso=20160601T08&p1=176&ah=1
> > > 
> > > On Mon, May 30, 2016 at 7:26 PM, Nigel Babu 
> > > wrote:
> > > > 
> > > > 
> > > > Hello,
> > > > 
> > > > I've now upgraded Gerrit on http://review.nigelb.me to 2.12.2.
> > > > Please
> > > > spend a few minutes testing that everything works as you expect
> > > > it to. If
> > > > I
> > > > don't hear anything negative by tomorrow, I'd like to schedule
> > > > an upgrade
> > > > this week.
> > > > 
> > > > --
> > > > nigelb
> > > 
> > > 
> > > 
> > > --
> > > nigelb
> > > 


Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Kotresh Hiremath Ravishankar
Hi Prasanna,

The 'Fix' button is visible. Maybe you are missing something; please check.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Prasanna Kalever" 
> To: "Nigel Babu" 
> Cc: "gluster-infra" , "gluster-devel" 
> 
> Sent: Tuesday, May 31, 2016 12:13:47 PM
> Subject: Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2
> 
> Hi Nigel,
> 
> I don't see the 'Fix' button in the comment section, which is the "fix
> for a remote code execution exploit" introduced in 2.12.2. It helps us
> in editing the code in the gerrit web editor instantaneously, so we
> don't have to cherry-pick the patch every time to address minor code
> changes.
> 
> I feel that is really helpful for the developers to address comments
> faster and easier.
> 
> Please see [1], it also has attachments showing how this looks
> 
> [1] http://www.gluster.org/pipermail/gluster-devel/2016-May/049429.html
> 
> 
> Thanks,
> --
> Prasanna
> 
> On Tue, May 31, 2016 at 10:39 AM, Nigel Babu  wrote:
> > Hello,
> >
> > A reminder: I'm hoping to get this done tomorrow morning at 0230 GMT[1].
> > I'll have a backup ready in case something goes wrong. I've tested this
> > process on review.nigelb.me and it's gone reasonably smoothly.
> >
> > [1]:
> > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Maintenance&iso=20160601T08&p1=176&ah=1
> >
> > On Mon, May 30, 2016 at 7:26 PM, Nigel Babu  wrote:
> >>
> >> Hello,
> >>
> >> I've now upgraded Gerrit on http://review.nigelb.me to 2.12.2. Please
> >> spend a few minutes testing that everything works as you expect it to. If
> >> I
> >> don't hear anything negative by tomorrow, I'd like to schedule an upgrade
> >> this week.
> >>
> >> --
> >> nigelb
> >
> >
> >
> >
> > --
> > nigelb
> >


Re: [Gluster-devel] dht mkdir preop check, afr and (non-)readable afr subvols

2016-05-31 Thread Xavier Hernandez

Hi,

On 31/05/16 07:05, Raghavendra Gowdappa wrote:

+gluster-devel, +Xavi

Hi all,

The context is [1], where bricks do pre-operation checks before doing a fop and 
proceed with fop only if pre-op check is successful.
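
A rough sketch of what such a brick-side pre-op check could look like
(illustrative names only, not the actual change in [1]; the parent layout
would presumably travel in the fop's xdata):

    /* Illustrative sketch: the client sends the parent layout it used for
     * its hashing decision, and the brick refuses the fop if the on-disk
     * layout has moved on. Compiles standalone; not GlusterFS code. */

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    struct layout_xattr {
            uint32_t commit_hash;
            uint32_t start;
            uint32_t stop;
    };

    /* hypothetical brick-side check, run before mkdir/create/unlink/... */
    static int
    preop_layout_check (const struct layout_xattr *from_client,
                        const struct layout_xattr *on_disk)
    {
            if (memcmp (from_client, on_disk, sizeof (*from_client)) != 0)
                    return -ESTALE; /* client's layout is stale: refresh, retry */

            return 0;               /* layouts agree: proceed with the fop */
    }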

@Xavi,

We need your inputs on behavior of EC subvolumes as well.


If I understand correctly, EC shouldn't have any problems here.

EC sends the mkdir request to all subvolumes that are currently 
considered "good" and tries to combine the answers. Answers that match 
in return code, errno (if necessary) and xdata contents (except for some 
special xattrs that are ignored for combination purposes), are grouped.


Then it takes the group with the most members/answers. If that group has a 
minimum size of #bricks - redundancy, it is considered the good answer. 
Otherwise EIO is returned because bricks are in an inconsistent state.


If there's any answer in another group, it's considered bad and gets 
marked so that self-heal will repair it using the good information from 
the majority of bricks.


xdata is combined and returned even if return code is -1.

Is that enough to cover the needed behavior?
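
A minimal, self-contained sketch of that selection policy (hypothetical
names and structures; this is not the actual ec_combine() code):

    #include <errno.h>
    #include <stdio.h>

    struct answer {
            int op_ret;     /* return code reported by one brick */
            int op_errno;   /* errno, compared only when op_ret < 0 */
            /* the real code also compares xdata, minus some special xattrs */
    };

    static int
    answers_match (const struct answer *a, const struct answer *b)
    {
            return a->op_ret == b->op_ret &&
                   (a->op_ret >= 0 || a->op_errno == b->op_errno);
    }

    /* Returns the index of an answer in the biggest matching group, or -1
     * (EIO) if that group is smaller than #bricks - redundancy. Answers
     * outside the winning group would be marked "bad" elsewhere, so that
     * self-heal can repair those bricks later. */
    static int
    select_good_answer (const struct answer *ans, int alive,
                        int bricks, int redundancy)
    {
            int best = -1, best_count = 0, i, j, count;

            for (i = 0; i < alive; i++) {
                    count = 0;
                    for (j = 0; j < alive; j++)
                            if (answers_match (&ans[i], &ans[j]))
                                    count++;
                    if (count > best_count) {
                            best_count = count;
                            best = i;
                    }
            }
            return (best_count >= bricks - redundancy) ? best : -1;
    }

    int
    main (void)
    {
            /* 4+2 volume: 6 bricks, redundancy 2, needs a group of 4 */
            struct answer ans[6] = {
                    { 0, 0 }, { 0, 0 }, { 0, 0 }, { 0, 0 },
                    { -1, EIO }, { -1, EIO },
            };
            int good = select_good_answer (ans, 6, 6, 2);

            if (good >= 0)
                    printf ("winning group: brick %d's answer\n", good);
            else
                    printf ("no quorum, returning EIO\n");
            return 0;
    }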

Xavi



[1] http://review.gluster.org/13885

regards,
Raghavendra

- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Raghavendra Gowdappa" 
Cc: "team-quine-afr" , "rhs-zteam" 

Sent: Tuesday, May 31, 2016 10:22:49 AM
Subject: Re: dht mkdir preop check, afr and (non-)readable afr subvols

I think you should start a discussion on gluster-devel so that Xavi gets a
chance to respond on the mails as well.

On Tue, May 31, 2016 at 10:21 AM, Raghavendra Gowdappa 
wrote:


Also note that we've plans to extend this pre-op check to all dentry
operations which also depend on the parent layout. So, the discussion needs
to cover all dentry operations, like:

1. create
2. mkdir
3. rmdir
4. mknod
5. symlink
6. unlink
7. rename

We also plan to have similar checks in the lock codepath for directories too
(planning to use the hashed subvolume as the lock-subvolume for directories;
see the sketch after the fop list below). So, more fops :)
8. lk (posix locks)
9. inodelk
10. entrylk
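
A self-contained toy sketch of the hashed-subvolume-as-lock-subvolume idea
(the real DHT would map the name hash through the directory layout rather
than a plain modulo, and the hash below is a stand-in, not gluster's
Davies-Meyer hash):

    #include <stdint.h>
    #include <stdio.h>

    /* toy stand-in for DHT's real name hash */
    static uint32_t
    toy_hash (const char *name)
    {
            uint32_t h = 5381;

            while (*name)
                    h = h * 33 + (unsigned char) *name++;
            return h;
    }

    /* every client computes the same subvolume for a given directory,
     * so one brick ends up serialising all locks for that directory */
    static int
    lock_subvol_for (const char *dirpath, int subvol_count)
    {
            return (int) (toy_hash (dirpath) % (uint32_t) subvol_count);
    }

    int
    main (void)
    {
            printf ("lock-subvolume for /exports/music: %d of 4\n",
                    lock_subvol_for ("/exports/music", 4));
            return 0;
    }

Since the mapping is deterministic and shared by all clients, contending
entry locks for one directory always land on the same brick.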

regards,
Raghavendra

- Original Message -

From: "Raghavendra Gowdappa" 
To: "team-quine-afr" 
Cc: "rhs-zteam" 
Sent: Tuesday, May 31, 2016 10:15:04 AM
Subject: dht mkdir preop check, afr and (non-)readable afr subvols

Hi all,

I have some queries related to the behavior of afr_mkdir with respect to
readable subvols.

1. While winding mkdir to subvols does afr check whether the subvolume is
good/readable? Or does it wind to all subvols irrespective of whether a
subvol is good/bad? In the latter case, what if
   a. mkdir succeeds on non-readable subvolume
   b. fails on readable subvolume

  What is the result reported to higher layers in the above scenario? If
  mkdir is failed, is it cleaned up on non-readable subvolume where it
  failed?

I am interested in this case as the dht pre-op check relies on layout xattrs,
and I assume layout xattrs in particular (and all xattrs in general) are
guaranteed to be correct only on a readable subvolume of afr. So, in essence,
we shouldn't be winding down mkdir on non-readable subvols, as whatever
decision the brick makes as part of the pre-op check is inherently flawed.

regards,
Raghavendra

--
Pranith

