Re: [Gluster-devel] [Gluster-Maintainers] Maintainers 2.0 Proposal

2017-07-20 Thread Amar Tumballi
On Thu, Jun 29, 2017 at 10:58 AM, Amar Tumballi  wrote:

> All,
>
> Thanks for participating actively in the discussions. With all your
> co-operation, we now have an update on the Maintainers 2.0 proposal. Vijay Bellur
> sent a patch last week [1] capturing all the discussions.
>
> Please go through the patch and see if you have any remaining concerns. There
> are many new names in there, so please review it so you can ack it. Niels
> (ndevos) added everyone named in the MAINTAINERS file as a
> reviewer on the patch. Please take some time today and give it a +1 to
> acknowledge that you are aware of the responsibilities. After 20 or more +1s on
> the patch, we will merge it and accordingly raise a ticket to
> update merge-access rights, etc.
>
> Also, if your name is in the maintainers list (even as a peer for a
> component), please become a member of the maintainers mailing list [2]. This list
> is an open list (all archives are available for anyone to read), so make sure
> you subscribe and become a member. Also, update your calendar with the
> maintainers' meeting timings so you can attend.
>
> [1] - https://review.gluster.org/17583
> [2] - http://lists.gluster.org/mailman/listinfo/maintainers
>
> Main maintainers 2.0 proposal link: https://hackmd.io/s/SkwiZd4qe
>
>
Thanks everyone. This activity is now complete. I have also raised a bug to
grant merge access to the relevant maintainers for their components [3].

All new maintainers, please make a note to become members of the mailing
list mentioned above this week, so you can participate in the bi-weekly
maintainers' meeting.

Regards,
Amar

[3] - https://bugzilla.redhat.com/show_bug.cgi?id=1473525



> Write back if you have any more concerns.
>
> Regards,
> Amar
>
>
>
> On Tue, Apr 18, 2017 at 2:27 PM, Michael Scherer 
> wrote:
>
>> Le mardi 18 avril 2017 à 10:25 +0200, Niels de Vos a écrit :
>> > On Mon, Apr 17, 2017 at 04:53:55PM -0700, Amye Scavarda wrote:
>> > > On Fri, Apr 14, 2017 at 2:40 AM, Michael Scherer > >
>> > > wrote:
>> > >
>> > > > Le jeudi 13 avril 2017 à 18:01 -0700, Amye Scavarda a écrit :
>> > > > > In light of community conversations, I've put some revisions on
>> the
>> > > > > Maintainers changes, outlined in the hackmd pad:
>> > > > > https://hackmd.io/s/SkwiZd4qe
>> > > > >
>> > > > > Feedback welcomed!
>> > > > >
>> > > > > Note that the goals of this are to expand out our reach as a
>> project
>> > > > > (Gluster.org) and make it easy to define who's a maintainer for
>> what
>> > > > > feature.
>> > > > > I'll highlight the goals in the document here:
>> > > > >
>> > > > > * Refine how we declare component owners in Gluster
>> > > > > * Create a deeper sense of ownership throughout the open source
>> project
>> > > > > * Welcome more contributors at a project-impacting level
>> > > > >
>> > > > > We've clarified what the levels of 'owners' and 'peers' are in
>> terms of
>> > > > > responsibility, and we'll look to implement this in the 3.12
>> cycle.
>> > > > > Thanks!
>> > > >
>> > > > So, I realize that the concept of a component is not defined in the
>> > > > document. I assume everybody has a shared understanding of what it
>> > > > is, but maybe not, so wouldn't it make sense to define it more
>> > > > clearly?
>> > > >
>> > > > Is this planned to be done later as part of "We will be working on
>> > > > carving out new components for things that make logical sense." ?
>> > > >
>> > > > As for example, with regard to my previous comment, would
>> > > > "infrastructure" be a component, would "documentation" be a
>> component ?
>> > > >
>> > > > My understanding is that there's a working spreadsheet being
>> refined to
>> > > sort out what's an area that needs a maintainer defined, and what's
>> > > something that maybe doesn't need a named maintainer. Documentation
>> is a
>> > > tricky place to get to, because that's something that you do just
>> naturally
>> > > so that future-you doesn't hate current-you.
>> >
>> > I agree that documentation should be part of the standard development
>> > workflow. Unfortunately, this is not something that gets done without
>> > reminding everyone about it. We still need maintainers/owners to bug
>> > developers for documentation of new features, monitor the pull-request
>> > queue and decide if the documentation is written in an acceptable way.
>>
>> There is also the overall issue of documentation consistency. For
>> example, style, glossary, etc.: all the kinds of things that shouldn't be
>> per component but aligned overall.
>>
>> > The maintenance of the gluster.readthedocs.io site might be a
>> > infrastructure task?
>>
>> Wouldn't it be more logical to have it managed by the people that did
>> champion RTD ? I am unable to find the discussions about it, but I am
>> quite sure I had some concerns regarding RTD and wouldn't volunteer to
>> maintain something where I had objections (such as "being unable to fix
>> 

[Gluster-devel] Regards to highlight the change in behavior of metadata operation for directory

2017-07-20 Thread Mohit Agrawal
Hi All,

   As we know, in GlusterFS we have one known issue with preserving
metadata consistency for directories in a pure distributed environment.

Problem:

   If, after a brick is stopped, an application performs a metadata
operation (e.g. a permission, uid/gid, or xattr change) on a directory and
the brick is then restarted, the metadata is not consistent on the stopped
brick, because DHT is not able to heal it properly. We have tried to fix
this problem with the patch https://review.gluster.org/#/c/15468/ and we
would like to highlight the change in behavior after this patch is merged.

Solution:

   This patch changes the current behavior of metadata operations on
directories.

   As of now, we do not restrict an application from running a metadata
operation on a directory while a brick is stopped, even if the brick
belongs to the hashed sub-volume for that directory. After this patch is
applied, we will not allow metadata operations (attribute/xattr changes)
on a directory if the hashed sub-volume (MDS) for that directory is down.

   After this patch is merged, we will treat the hashed sub-volume as the
MDS for that directory, and before winding any fop at the DHT level to the
next xlator we check the status of the MDS: if the stopped brick is the
MDS of that directory, we do not allow the operation on the directory;
otherwise DHT winds the call to the next xlator.
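The gate described above can be sketched as follows. This is a simplified
Python illustration of the decision logic only, not Gluster's actual C
implementation; the names (`hashed_subvol`, `allow_metadata_fop`) and the
hash function are illustrative assumptions:

```python
# Sketch of the DHT-level gate described above: a metadata fop on a
# directory is allowed only when the hashed sub-volume (MDS) is up.
# All names here are illustrative, not actual GlusterFS identifiers.

def hashed_subvol(subvols, dirname):
    """Pick the hashed sub-volume for a directory (stand-in for DHT's hash)."""
    return subvols[hash(dirname) % len(subvols)]

def allow_metadata_fop(subvols, up_subvols, dirname):
    """Return True if the metadata fop may be wound to the next xlator."""
    mds = hashed_subvol(subvols, dirname)
    return mds in up_subvols

subvols = ["subvol-0", "subvol-1", "subvol-2"]
mds = hashed_subvol(subvols, "/data")
# With the MDS down, the fop is rejected; with all bricks up, it is wound on.
print(allow_metadata_fop(subvols, set(subvols) - {mds}, "/data"))  # False
print(allow_metadata_fop(subvols, set(subvols), "/data"))          # True
```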



Regards
Mohit Agrawal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #98

2017-07-20 Thread Atin Mukherjee
NetBSD runs are not going through; add-brick-self-heal.t seems to be
generating a core.

-- Forwarded message -
From: 
Date: Fri, 21 Jul 2017 at 06:20
Subject: [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #98
To: , , , <
vbel...@redhat.com>


See <
https://build.gluster.org/job/netbsd-periodic/98/display/redirect?page=changes
>

Changes:

[Jeff Darcy] glusterd: fix brick start race

[Vijay Bellur] MAINTAINERS: Changes for Maintainers 2.0

--
[...truncated 241.43 KB...]
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or
kill -l [sigspec]
rm: /build/install/var/run/gluster: is a directory
[... the same "kill: usage: ..." and "rm: /build/install/var/run/gluster:
is a directory" lines repeat for the remainder of the truncated log ...]

Re: [Gluster-devel] What would be ideal option for 'auth.allow' to support subdir mount?

2017-07-20 Thread Amar Tumballi
On Thu, Jul 20, 2017 at 9:21 PM, Niels de Vos  wrote:

> On Thu, Jul 20, 2017 at 08:25:23PM +0530, Amar Tumballi wrote:
> > On Thu, Jul 20, 2017 at 7:36 PM, Niels de Vos  wrote:
> >
> > > On Thu, Jul 20, 2017 at 07:11:29PM +0530, Amar Tumballi wrote:
> > > > Hi,
> > > >
> > > > I was working on subdir mount for fuse clients [1], and was able to
> > > handle
> > > > pieces just fine in filesystem part of gluster. [2]
> > > >
> > > > What is pending is, how will we handle the authentication options for
> > > this
> > > > at each subdir level?
> > > >
> > > > I propose to keep the current option and extending it to handle new
> > > feature
> > > > with proper backward compatibility.
> > > >
> > > > Currently, the option auth.allow (and auth.reject) are of the type
> > > > GF_OPTION_TYPE_INTERNET_ADDRESS_LIST. Which expects valid internet
> > > > addresses with comma separation.
> > > >
> > > > For example the current option looks likes this:
> > > >
> > > >  'option auth.addr.brick-name.allow *' OR 'option
> > > > auth.addr.brick-name.allow "192.168.*.* ,10.10.*.*"'.
> > > >
> > > > In future, it may look like:
> > > >
> > > > `option auth.addr.brick-name.allow "10.0.1.13;192.168.1.*
> > > > =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"`
> > > >
> > > > so each entries will be separated by ';'. And in each entry, first
> part
> > > ("
> > > > =") is address list and second part is directory. If directory is
> empty,
> > > > its assumed as '/'. (Handles the backward compatibility). And if
> there is
> > > > no entry for a $subdir here, that $subdir won't be mountable.
> > >
> > > IIRC Gluster/NFS allows you to set permissions for subdir mounting with
> > > a format like this:
> > >
> > >   /subdir/next/dir(IP,IP-range,...) /subdir2(IP)
> > >
> > > This is good, but would currently break the compatibility with existing
> > auth.allow of gluster.
> >
> > Backward compatibility was the main reason for me to consider the above
> > approach.
> >
> > It would be best to use the existing format if we can to prevent
> > > confusion among our users.
> > >
> > > Currently existing gluster's option is not same as NFS in my opinion.
> How
> > do you want to handle it?
>
> I'm wondering if the current format that is used for NFS is not
> sufficient? Some defaults and quirks that would apply:
>
>
Should be sufficient. Earlier I was not sure which option you were
talking about.

For everyone's clarity, I assume Niels is talking about 'nfs3.*.export-dir'
option in xlators/nfs/server/src/nfs.c.

It is of the form:  /foo(192.168.1.0/24|host1|10.1.1.8),/host2.

[(hostspec[|hostspec|...])][,...]

A point to note here: it is of type GF_OPTION_STR, which means there
won't be any validation for this key, unlike Gluster's current
server-protocol auth.allow, which checks for a valid hostname during
'gluster volume set' itself.

I am fine with supporting this format for auth.allow too, handling the
current format as a special case for backward compatibility. I will give
others time till Monday before confirming this and going ahead with the
implementation. Please suggest other valid options, with reasons, if this
is not enough.
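For reference, the NFS-style value quoted above
(`/subdir(hostspec|hostspec|...),...`) could be parsed roughly as below.
This is only an illustrative sketch under that stated grammar, not the
actual parser in nfs.c:

```python
import re

# Illustrative sketch of parsing a Gluster/NFS export-dir style string such
# as "/foo(192.168.1.0/24|host1|10.1.1.8),/bar(host2)"; not the real nfs.c
# code. Entries are ','-separated; hostspecs within an entry are '|'-separated.

ENTRY_RE = re.compile(r"(/[^(,\s]*)(?:\(([^)]*)\))?")

def parse_export_dirs(value):
    """Return {subdir: [hostspec, ...]}; an empty list means no restriction."""
    exports = {}
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        m = ENTRY_RE.fullmatch(entry)
        if not m:
            raise ValueError("bad export entry: %r" % entry)
        subdir, hosts = m.group(1), m.group(2)
        exports[subdir] = hosts.split("|") if hosts else []
    return exports

print(parse_export_dirs("/foo(192.168.1.0/24|host1|10.1.1.8),/bar(host2)"))
```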

Regards,
Amar


>  - if an entry does not start with "/", assume it is an IP/host/... and
>apply the restriction to the whole volume
>  - separator between entries can be either " " or "," or a combination
>
> It would be good not to break any of the current accepted formats, and
> make them equal if we can.
>
> Do you see a problem with this that I might have missed?
> Niels
>
>
> >
> > -Amar
> >
> >
> > > Thanks,
> > > Niels
> > >
> > >
> > > >
> > > > (The above format is handled properly already at [2] in addr.c, the
> > > pending
> > > > thing is to handle the option properly in options.c's validate).
> > > >
> > > > [1] - https://github.com/gluster/glusterfs/issues/175
> > > > [2] - https://review.gluster.org/17141
> > > >
> > > > If everyone agrees to this, I guess we can pull it off before
> absolute
> > > > feature freeze date for 3.12 branch.
> > > >
> > > > Let me know the feedback. (I am updating the same content in github,
> so
> > > > feel free to comment there too).
> > > >
> > > > NOTE: I thought of using ':' (colon) as field separator between
> addr_list
> > > > and subdir entry, but with IPv6 ':' is valid character in string.
> Hence
> > > > using ' ='.
> > > > --
> > > > Amar Tumballi (amarts)
> > >
> > > > ___
> > > > Gluster-devel mailing list
> > > > Gluster-devel@gluster.org
> > > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
> > >
> >
> >
> > --
> > Amar Tumballi (amarts)
>



-- 
Amar Tumballi (amarts)

Re: [Gluster-devel] What would be ideal option for 'auth.allow' to support subdir mount?

2017-07-20 Thread Niels de Vos
On Thu, Jul 20, 2017 at 08:25:23PM +0530, Amar Tumballi wrote:
> On Thu, Jul 20, 2017 at 7:36 PM, Niels de Vos  wrote:
> 
> > On Thu, Jul 20, 2017 at 07:11:29PM +0530, Amar Tumballi wrote:
> > > Hi,
> > >
> > > I was working on subdir mount for fuse clients [1], and was able to
> > handle
> > > pieces just fine in filesystem part of gluster. [2]
> > >
> > > What is pending is, how will we handle the authentication options for
> > this
> > > at each subdir level?
> > >
> > > I propose to keep the current option and extending it to handle new
> > feature
> > > with proper backward compatibility.
> > >
> > > Currently, the option auth.allow (and auth.reject) are of the type
> > > GF_OPTION_TYPE_INTERNET_ADDRESS_LIST. Which expects valid internet
> > > addresses with comma separation.
> > >
> > > For example the current option looks likes this:
> > >
> > >  'option auth.addr.brick-name.allow *' OR 'option
> > > auth.addr.brick-name.allow "192.168.*.* ,10.10.*.*"'.
> > >
> > > In future, it may look like:
> > >
> > > `option auth.addr.brick-name.allow "10.0.1.13;192.168.1.*
> > > =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"`
> > >
> > > so each entries will be separated by ';'. And in each entry, first part
> > ("
> > > =") is address list and second part is directory. If directory is empty,
> > > its assumed as '/'. (Handles the backward compatibility). And if there is
> > > no entry for a $subdir here, that $subdir won't be mountable.
> >
> > IIRC Gluster/NFS allows you to set permissions for subdir mounting with
> > a format like this:
> >
> >   /subdir/next/dir(IP,IP-range,...) /subdir2(IP)
> >
> > This is good, but would currently break the compatibility with existing
> auth.allow of gluster.
> 
> Backward compatibility was the main reason for me to consider the above
> approach.
> 
> It would be best to use the existing format if we can to prevent
> > confusion among our users.
> >
> > Currently existing gluster's option is not same as NFS in my opinion. How
> do you want to handle it?

I'm wondering if the current format that is used for NFS is not
sufficient? Some defaults and quirks that would apply:

 - if an entry does not start with "/", assume it is an IP/host/... and
   apply the restriction to the whole volume
 - separator between entries can be either " " or "," or a combination
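A minimal sketch of those two rules (illustrative only; the entry
splitting and the leading-"/" test are exactly the assumptions stated
above, not existing Gluster code):

```python
import re

# Sketch of the classification rules above: entries separated by spaces
# and/or commas; an entry starting with "/" is a subdir rule, anything
# else restricts the whole volume. Illustrative only.

def classify_entries(value):
    """Split an auth value into (whole-volume hosts, subdir rules)."""
    whole_volume, subdirs = [], []
    for entry in re.split(r"[,\s]+", value.strip()):
        if not entry:
            continue
        (subdirs if entry.startswith("/") else whole_volume).append(entry)
    return whole_volume, subdirs

print(classify_entries("10.0.1.13, /subdir1(192.168.1.*) 192.168.10.*"))
```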

It would be good not to break any of the current accepted formats, and
make them equal if we can.

Do you see a problem with this that I might have missed?
Niels


> 
> -Amar
> 
> 
> > Thanks,
> > Niels
> >
> >
> > >
> > > (The above format is handled properly already at [2] in addr.c, the
> > pending
> > > thing is to handle the option properly in options.c's validate).
> > >
> > > [1] - https://github.com/gluster/glusterfs/issues/175
> > > [2] - https://review.gluster.org/17141
> > >
> > > If everyone agrees to this, I guess we can pull it off before absolute
> > > feature freeze date for 3.12 branch.
> > >
> > > Let me know the feedback. (I am updating the same content in github, so
> > > feel free to comment there too).
> > >
> > > NOTE: I thought of using ':' (colon) as field separator between addr_list
> > > and subdir entry, but with IPv6 ':' is valid character in string. Hence
> > > using ' ='.
> > > --
> > > Amar Tumballi (amarts)
> >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> >
> 
> 
> -- 
> Amar Tumballi (amarts)



Re: [Gluster-devel] What would be ideal option for 'auth.allow' to support subdir mount?

2017-07-20 Thread Amar Tumballi
On Thu, Jul 20, 2017 at 7:36 PM, Niels de Vos  wrote:

> On Thu, Jul 20, 2017 at 07:11:29PM +0530, Amar Tumballi wrote:
> > Hi,
> >
> > I was working on subdir mount for fuse clients [1], and was able to
> handle
> > pieces just fine in filesystem part of gluster. [2]
> >
> > What is pending is, how will we handle the authentication options for
> this
> > at each subdir level?
> >
> > I propose to keep the current option and extending it to handle new
> feature
> > with proper backward compatibility.
> >
> > Currently, the option auth.allow (and auth.reject) are of the type
> > GF_OPTION_TYPE_INTERNET_ADDRESS_LIST. Which expects valid internet
> > addresses with comma separation.
> >
> > For example the current option looks likes this:
> >
> >  'option auth.addr.brick-name.allow *' OR 'option
> > auth.addr.brick-name.allow "192.168.*.* ,10.10.*.*"'.
> >
> > In future, it may look like:
> >
> > `option auth.addr.brick-name.allow "10.0.1.13;192.168.1.*
> > =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"`
> >
> > so each entries will be separated by ';'. And in each entry, first part
> ("
> > =") is address list and second part is directory. If directory is empty,
> > its assumed as '/'. (Handles the backward compatibility). And if there is
> > no entry for a $subdir here, that $subdir won't be mountable.
>
> IIRC Gluster/NFS allows you to set permissions for subdir mounting with
> a format like this:
>
>   /subdir/next/dir(IP,IP-range,...) /subdir2(IP)
>
> This is good, but would currently break the compatibility with existing
auth.allow of gluster.

Backward compatibility was the main reason for me to consider the above
approach.

It would be best to use the existing format if we can to prevent
> confusion among our users.
>
> Currently, Gluster's existing option is not the same as NFS's, in my opinion. How
do you want to handle it?

-Amar


> Thanks,
> Niels
>
>
> >
> > (The above format is handled properly already at [2] in addr.c, the
> pending
> > thing is to handle the option properly in options.c's validate).
> >
> > [1] - https://github.com/gluster/glusterfs/issues/175
> > [2] - https://review.gluster.org/17141
> >
> > If everyone agrees to this, I guess we can pull it off before absolute
> > feature freeze date for 3.12 branch.
> >
> > Let me know the feedback. (I am updating the same content in github, so
> > feel free to comment there too).
> >
> > NOTE: I thought of using ':' (colon) as field separator between addr_list
> > and subdir entry, but with IPv6 ':' is valid character in string. Hence
> > using ' ='.
> > --
> > Amar Tumballi (amarts)
>
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>


-- 
Amar Tumballi (amarts)

Re: [Gluster-devel] What would be ideal option for 'auth.allow' to support subdir mount?

2017-07-20 Thread Niels de Vos
On Thu, Jul 20, 2017 at 07:11:29PM +0530, Amar Tumballi wrote:
> Hi,
> 
> I was working on subdir mount for fuse clients [1], and was able to handle
> pieces just fine in filesystem part of gluster. [2]
> 
> What is pending is, how will we handle the authentication options for this
> at each subdir level?
> 
> I propose to keep the current option and extending it to handle new feature
> with proper backward compatibility.
> 
> Currently, the option auth.allow (and auth.reject) are of the type
> GF_OPTION_TYPE_INTERNET_ADDRESS_LIST. Which expects valid internet
> addresses with comma separation.
> 
> For example the current option looks likes this:
> 
>  'option auth.addr.brick-name.allow *' OR 'option
> auth.addr.brick-name.allow "192.168.*.* ,10.10.*.*"'.
> 
> In future, it may look like:
> 
> `option auth.addr.brick-name.allow "10.0.1.13;192.168.1.*
> =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"`
> 
> so each entries will be separated by ';'. And in each entry, first part ("
> =") is address list and second part is directory. If directory is empty,
> its assumed as '/'. (Handles the backward compatibility). And if there is
> no entry for a $subdir here, that $subdir won't be mountable.

IIRC Gluster/NFS allows you to set permissions for subdir mounting with
a format like this:

  /subdir/next/dir(IP,IP-range,...) /subdir2(IP)

It would be best to use the existing format if we can to prevent
confusion among our users.

Thanks,
Niels


> 
> (The above format is handled properly already at [2] in addr.c, the pending
> thing is to handle the option properly in options.c's validate).
> 
> [1] - https://github.com/gluster/glusterfs/issues/175
> [2] - https://review.gluster.org/17141
> 
> If everyone agrees to this, I guess we can pull it off before absolute
> feature freeze date for 3.12 branch.
> 
> Let me know the feedback. (I am updating the same content in github, so
> feel free to comment there too).
> 
> NOTE: I thought of using ':' (colon) as field separator between addr_list
> and subdir entry, but with IPv6 ':' is valid character in string. Hence
> using ' ='.
> -- 
> Amar Tumballi (amarts)

> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel




[Gluster-devel] What would be ideal option for 'auth.allow' to support subdir mount?

2017-07-20 Thread Amar Tumballi
Hi,

I was working on subdir mount for fuse clients [1], and was able to handle
pieces just fine in filesystem part of gluster. [2]

What is pending is, how will we handle the authentication options for this
at each subdir level?

I propose to keep the current option and extend it to handle the new
feature with proper backward compatibility.

Currently, the options auth.allow (and auth.reject) are of the type
GF_OPTION_TYPE_INTERNET_ADDRESS_LIST, which expects valid internet
addresses with comma separation.

For example, the current option looks like this:

 'option auth.addr.brick-name.allow *' OR 'option
auth.addr.brick-name.allow "192.168.*.* ,10.10.*.*"'.

In future, it may look like:

`option auth.addr.brick-name.allow "10.0.1.13;192.168.1.*
=/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"`

Each entry will be separated by ';'. Within an entry, the first part
(before ' =') is the address list and the second part is the directory. If
the directory is empty, it is assumed to be '/' (this handles backward
compatibility). And if there is no entry for a $subdir here, that $subdir
won't be mountable.

(The above format is already handled properly at [2] in addr.c; the
pending work is to handle the option properly in options.c's validate.)

[1] - https://github.com/gluster/glusterfs/issues/175
[2] - https://review.gluster.org/17141

If everyone agrees to this, I guess we can pull it off before the
absolute feature-freeze date for the 3.12 branch.

Let me know your feedback. (I am updating the same content on GitHub, so
feel free to comment there too.)

NOTE: I thought of using ':' (colon) as the field separator between
addr_list and the subdir entry, but with IPv6, ':' is a valid character in
the string. Hence using ' ='.
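As a sketch of the proposed grammar (';'-separated entries, ' =' splitting
the address list from the subdir, empty subdir meaning '/'), the parsing
could look like this in Python. This is illustrative only, not the actual
handling in addr.c:

```python
# Sketch of parsing a proposed auth.allow value such as
# "10.0.1.13;192.168.1.* =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2".
# Entries are ';'-separated; within an entry, ' =' splits the address list
# from the subdir; a missing subdir means '/'. Illustrative only.

def parse_auth_allow(value):
    """Return {subdir: [address, ...]}."""
    rules = {}
    for entry in value.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        addrs, sep, subdir = entry.partition(" =")
        subdir = subdir.strip() if sep else "/"
        rules.setdefault(subdir or "/", []).extend(
            a.strip() for a in addrs.split(",") if a.strip())
    return rules

print(parse_auth_allow(
    "10.0.1.13;192.168.1.* =/subdir1;192.168.10.* ,192.168.11.104 =/subdir2"))
```

Note how a plain entry with no ' =' (e.g. the legacy value `*`) falls
through to the '/' rule, which is what gives the backward compatibility
described above.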
-- 
Amar Tumballi (amarts)

[Gluster-devel] Coverity covscan for 2017-07-20-8a09d780 (master branch)

2017-07-20 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-07-20-8a09d780


Re: [Gluster-devel] Error while mounting gluster volume

2017-07-20 Thread Pranith Kumar Karampuri
The following generally means it is not able to connect to any of the
glusterds in the cluster.

[1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140
(Success)
[1970-01-02 10:54:04.420422] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker]
0-epoll: Started thread with index 1
[1970-01-02 10:54:04.420429] I [glusterfsd-mgmt.c:1824:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers


On Thu, Jul 20, 2017 at 4:01 PM, ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> While mounting the gluster volume using 'mount -t glusterfs' command it is
> getting failed.
>
> When we checked the log file getting the below logs
>
> [1970-01-02 10:54:04.420065] E [MSGID: 101187] 
> [event-epoll.c:391:event_register_epoll]
> 0-epoll: failed to add fd(=7) to epoll fd(=0) [Invalid argument]
> [1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed
> to register the event
> [1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140
> (Success)
> [1970-01-02 10:54:04.420422] I [MSGID: 101190] 
> [event-epoll.c:632:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [1970-01-02 10:54:04.420429] I [glusterfsd-mgmt.c:1824:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: Exhausted all volfile servers
> [1970-01-02 10:54:04.420480] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=2
> [1970-01-02 10:54:04.420511] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=3
> [1970-01-02 10:54:04.420534] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=4
> [1970-01-02 10:54:04.420556] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=5
> [1970-01-02 10:54:04.420566] W [glusterfsd.c:1238:cleanup_and_exit]
> (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0xb18c) [0x3fff8155e1e4]
> -->/usr/sbin/glusterfs() [0x100103a0] 
> -->/usr/sbin/glusterfs(cleanup_and_exit-0x1beac)
> [0x100097ac] ) 0-: received signum (1), shutting down
> [1970-01-02 10:54:04.420579] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=6
> [1970-01-02 10:54:04.420606] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=7
> [1970-01-02 10:54:04.420635] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=8
> [1970-01-02 10:54:04.420664] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=9
> [1970-01-02 10:54:04.420695] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=10
> [1970-01-02 10:54:04.420722] E [MSGID: 101063] 
> [event-epoll.c:550:event_dispatch_epoll_handler]
> 0-epoll: stale fd found on idx=0, gen=0, events=0, slot->gen=11
>
> for more logs we have attached the mount log file as well.
>
> Could you please help us in to identify the root cause?
>
> --
>
> Regards
> Abhishek Paliwal
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith