Re: [Gluster-devel] [Gluster-Maintainers] Maintainers 2.0 Proposal

2017-03-18 Thread Vijay Bellur
On Sat, Mar 18, 2017 at 7:27 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Mar 16, 2017 at 7:42 AM, Vijay Bellur  wrote:
>
>> Hi All,
>>
>> We have been working on a proposal [1] to make the lifecycle management
>> of Gluster maintainers more structured. We intend to make the proposal
>> effective around 3.11 (May 2017).
>>
>> Please review the proposal and let us know your feedback. If you need
>> clarity on any existing aspect or feel the need for something additional in
>> the proposal, please feel free to let us know.
>>
>
>
>-
>
>It’s okay to drop a component if they are not able to find
>time/energy. Owners are requested to minimize disruption to the project by
>helping with transitions and announcing their intentions.
>
> How and to whom should it be communicated that an owner/peer is doing a bad
> job and there are better alternatives for the component?
>
>
All feedback should be directed to the project and community leaders. At
this point in time, please direct any feedback (both positive and negative)
to Amye, Jeff and me.

Thanks,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Maintainers 2.0 Proposal

2017-03-18 Thread Vijay Bellur
On Sat, Mar 18, 2017 at 7:17 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Sat, Mar 18, 2017 at 1:20 AM, Amar Tumballi 
> wrote:
>
>> I don't want to take the discussions in another direction, but want
>> clarity on few things:
>>
>> 1. Does being a maintainer mean they only review/merge patches?
>> 2. Should maintainers be responsible for answering ML / IRC questions
>> (well, they should focus more on documentation IMO)?
>> 3. Whose responsibility is it to maintain the gluster.org webpage? I
>> personally feel the responsibility should be well defined.
>> 4. Can a component have more than one owner? More than one maintainer, etc.?
>>
>
> Having more than one maintainer is the best case we should aim for. I have
> seen EC benefit tremendously because of this. Work keeps moving because at
> least one of Xavi or I is available for discussions.
>


The number of maintainers will largely be a function of the complexity of
the component and the nature of ongoing work (features being designed,
patches for review, outstanding technical debt). The intent of this
exercise is not to disturb the status quo for components where we do not have
problems.

Some of these questions should be clearly answered in the document IMO.
>>
>
+1. We are evolving a project governance document; let us strive for as much
clarity as needed before making it effective.

Thanks,
Vijay

Re: [Gluster-devel] Why nodeid==1 need to be checked and dealt with specially in "fuse-bridge.c"?

2017-03-18 Thread Zhitao Li
Thanks for your kind reply.


In fact, I am in the worst case: all 1 files are in the mount point, without 
using sub-directories. fuse_getattr(lookup) needs to be called about 436 times, 
at 3 ms per call (md_cache always misses). That is why fuse_getattr(lookup) 
costs so much.


Could you tell me more about why "nodeid==1" needs to be checked? You said: 
"The check makes sure that there is a LOOKUP done on the root of the volume 
('/' always has inode '1', GFID '00000000-0000-0000-0000-000000000001')."

Now I have disabled the special check, meaning that the lookup operation is 
replaced by a stat operation. Will both return the correct result to the fuse 
kernel? What can go wrong without the check?

It works normally now. Could you tell me some cases where it doesn't work?


Thank you!



Best regards,
Zhitao Li

Sent from Outlook

From: Niels de Vos 
Sent: Friday, March 17, 2017 6:12:35 PM
To: Zhitao Li
Cc: gluster-devel@gluster.org; Zhitao Li; 1318078...@qq.com
Subject: Re: [Gluster-devel] Why nodeid==1 need to be checked and dealt with 
specially in "fuse-bridge.c"?

On Thu, Mar 16, 2017 at 10:26:25AM +, Zhitao Li wrote:
> Hello, everyone,
>
>
> I have been trying to optimize "ls" performance for Glusterfs
> recently. My volume is disperse (48 bricks with redundancy 16), and I
> mount it with fuse. I create 1 little files in the mount point. Then I
> execute the "ls" command. In my cluster, it takes about 3 seconds.
>
> I have a question about the fuse_getattr function in "fuse-bridge.c". Why
> do we need to check whether nodeid is equal to 1, which means it is the
> mount point? It is hard for me to grasp its meaning.
>
> (In my case, I find the fuse_getattr operation takes nearly half the time
> of "ls"; that is why I want to know what the check means.)
>
>
> I tried disabling the special check, and then tested "ls". It works
> normally and gives a 2x speedup (about 1.3 s without the check). The reason
> is that in my case, the "lookup" cost is much higher than "stat". Without
> the special check, getattr goes into "stat" instead of "lookup".
>
>
> Could you tell me the meaning of the special check for "nodeid == 1"?

I'm not sure the check in fuse_getattr would account for the
huge performance loss/win in your test. The check makes sure that there
is a LOOKUP done on the root of the volume ("/" always has inode "1",
GFID "00000000-0000-0000-0000-000000000001"). In the majority of the
cases this should be skipped, unless you run your tests without using a
sub-directory.

You can use 'gluster volume profile' to get some additional performance
statistics. For simple (single brick, or distribution only) volumes, you
can also use Wireshark (and "tshark -z srt,...") to see what goes over
the (slow) network.

HTH,
Niels

Re: [Gluster-devel] About inode table: client, server and inconsistency

2017-03-18 Thread Tahereh Fattahi
Thank you very much.
Is it possible to change something in the server inode table during a fop from
the client? (I want to change the dht_layout of a directory when creating a file
in that directory, but I don't know how to send the changed layout to the
servers.)

On Sat, Mar 18, 2017 at 6:36 PM, Amar Tumballi  wrote:

>
>
> On Thu, Mar 16, 2017 at 10:30 PM, Tahereh Fattahi 
> wrote:
>
>> Hi
>> Is it correct that each brick has one inode table for itself and each
>> client has one inode table that stores anything that is stored in bricks
>> inode table?
>>
>> For a given inode, the contents on the client side and the server side can be
> very much different, depending on how the volume graph is composed.
>
>
>>
>> Does all inode tables store in RAM all the time?
>>
>
> The client (mainly fuse) inode table will be in memory all the time, until the
> kernel sends a FORGET. On the brick side we keep a limited number of inodes in
> memory (there is an option called 'lru-limit').
>
>
>>
>>
>> When and how is the client's inode table updated (how is the inconsistency
>> between the client and brick inode tables, caused by rebalance or other
>> clients' fops, resolved)?
>>
>>
> All the translators are designed to handle the consistency check in their
> 'lookup()' code, and they should send a response up with an error saying it is
> a stale inode (ESTALE); upon receiving it, the client inode table
> refreshes its inode and does a fresh lookup again. This allows us to keep
> the inode table consistent.
>
> Hope that answers the question.
>
> -Amar
>
>
>
>>
>
>
>
> --
> Amar Tumballi (amarts)
>

Re: [Gluster-devel] About inode table: client, server and inconsistency

2017-03-18 Thread Amar Tumballi
On Thu, Mar 16, 2017 at 10:30 PM, Tahereh Fattahi 
wrote:

> Hi
> Is it correct that each brick has one inode table for itself and each
> client has one inode table that stores anything that is stored in bricks
> inode table?
>
> For a given inode, the contents on the client side and the server side can be
very much different, depending on how the volume graph is composed.


>
> Does all inode tables store in RAM all the time?
>

The client (mainly fuse) inode table will be in memory all the time, until the
kernel sends a FORGET. On the brick side we keep a limited number of inodes in
memory (there is an option called 'lru-limit').


>
>
> When and how is the client's inode table updated (how is the inconsistency
> between the client and brick inode tables, caused by rebalance or other
> clients' fops, resolved)?
>
>
All the translators are designed to handle the consistency check in their
'lookup()' code, and they should send a response up with an error saying it is a
stale inode (ESTALE); upon receiving it, the client inode table
refreshes its inode and does a fresh lookup again. This allows us to keep
the inode table consistent.

Hope that answers the question.

-Amar



>



-- 
Amar Tumballi (amarts)

[Gluster-devel] What does xdata mean? "gfid-req"?

2017-03-18 Thread Zhitao Li
Hello, everyone,


I am investigating the difference between the stat and lookup operations in 
GlusterFS now. In the translator named "md_cache", a stat operation will 
generally hit the cache, while a lookup operation will miss it.


The reason is that for a lookup operation, md_cache checks whether the xdata 
can be satisfied. In my case, lookup includes the xdata key "gfid-req", filled by 
fuse-bridge. However, in md_cache this check never passes, because the load flag 
of the mdc_key "gfid-req" is always 0.


Could anyone tell me why "gfid-req" is filled in by fuse-bridge.c (fuse_getattr: 
nodeid==1 -> lookup)? What does it mean? And how is xdata used?

What would happen if there were no xdata?

Thank you!


Best regards,
Zhitao Li

Sent from Outlook

Re: [Gluster-devel] Maintainers 2.0 Proposal

2017-03-18 Thread Niels de Vos
On Sat, Mar 18, 2017 at 01:20:31AM +0530, Amar Tumballi wrote:
> I don't want to take the discussions in another direction, but want clarity
> on few things:
> 
> 1. Does being a maintainer mean they only review/merge patches?
> 2. Should maintainers be responsible for answering ML / IRC questions
> (well, they should focus more on documentation IMO)?

IMHO maintainers should be *very* responsive to threads about
development and feature topics, less so for user threads. The majority of
users' topics should be addressed by non-maintainers who (hopefully)
have a little more time than most of the maintainers. The document
speaks about Owners+Peers though, so the intention may be a little
different.

> 3. Whose responsibility is it to maintain the gluster.org webpage? I personally
> feel the responsibility should be well defined.

This is a different project in the Gluster Community... Just like for other
projects, there needs to be someone (or a group) responsible for
keeping the different projects in sync.

> 4. Can a component have more than one owner? More than one maintainer, etc.?

In addition to that, it would be good to have a further explanation of
how the current structure fits in the new proposal.

Currently we have something like component maintainers, and feature
owners (often across components). I see the architects as the project
leads for the GlusterFS project who keep an eye on everything across the
Gluster Community and make sure things for users and related projects
match and/or are being worked on in the GlusterFS project.

> Some of these questions should be clearly answered in the document IMO.

Yes, definitely!

Cheers,
Niels


> 
> Regards,
> Amar
> 
> 
> On Fri, Mar 17, 2017 at 11:55 PM, Amye Scavarda  wrote:
> 
> > Posting in line, but it may be pretty hard to follow.
> > Apologies if I miss something.
> >
> > On Fri, Mar 17, 2017 at 11:06 AM, Niels de Vos  wrote:
> >
> >> On Wed, Mar 15, 2017 at 10:12:18PM -0400, Vijay Bellur wrote:
> >> > Hi All,
> >> >
> >> > We have been working on a proposal [1] to make the lifecycle management
> >> of
> >> > Gluster maintainers more structured. We intend to make the proposal
> >> > effective around 3.11 (May 2017).
> >> >
> >> > Please review the proposal and let us know your feedback. If you need
> >> > clarity on any existing aspect or feel the need for something
> >> additional in
> >> > the proposal, please feel free to let us know.
> >>
> >> I'll just include the proposal here and add inline comments. I'm not
> >> sure if this is the best way, or if you would like anyone edit the
> >> document directly...
> >>
> >> > Thanks!
> >> > Amye & Vijay
> >> >
> >> > [1]  https://hackmd.io/s/SkwiZd4qe
> >>
> >>
> >>
> >> > # Revised Maintainers for 3.11
> >> >
> >> > AI from Community Meeting, March 1:
> >> > amye to work on revised maintainers draft with vbellur to get out for
> >> > next maintainer's meeting. We'll approve it 'formally' there, see how it
> >> > works for 3.11.
> >>
> >> The next maintainers meeting happens when many maintainers are at VAULT.
> >> I would not expect a large attendance at that time. Also, Amye sent an
> >> email with a different target date?
> >>
> >
> > Feedback target date of 30 days, that's what I was indicating. This was
> > reviewed in the maintainers' meeting on March 8 and we're now expanding out
> > to the larger group.
> >
> >>
> >> > ## Goals:
> >> > * Refine how we declare component owners in Gluster
> >> > * Create a deeper sense of ownership throughout the open source project
> >> > * Welcome more contributors at a project impacting level
> >>
> >> It would be good to make a distinction between the Gluster Community and
> >> the GlusterFS Project. "Gluster" refers in my understanding to all the
> >> projects of the Gluster Community. This document looks most aimed at the
> >> GlusterFS project, with some Gluster Community references.
> >
> >
> >
> > Is this distinction relevant? We're talking about how we define a
> > maintainer for contributing to the Gluster community overall. As I work
> > through this, I see your confusion. I don't think that we'd be able to make
> > this call for 'all related projects', but for committing back into Gluster
> > proper, yes.
> >
> >
> >>
> > > ## Definition of Owners + Peers
> >> > * Commit access to the project is not dependent on being a maintainer. A
> >> >   maintainer is a leadership role within the project to help drive the
> >> >   project forward.
> >>
> >> "the project", is that the glusterfs git repository, or any repository
> >> that we maintain? How would we see this for projects that contain
> >> Gluster modules like NFS-Ganesha and Samba? Or are those not considered
> >> as "our" components?
> >
> >
> > I think initially, we'd want to limit this to just the Gluster project.
> > Too much expansion and we'll have too much change too quickly.
> >
> >
> >>
> > > * Owner - Subject matter expert, help design large feature changes and
> >> >   decide overall goal of the component. R

Re: [Gluster-devel] [Gluster-Maintainers] Maintainers 2.0 Proposal

2017-03-18 Thread Pranith Kumar Karampuri
On Thu, Mar 16, 2017 at 7:42 AM, Vijay Bellur  wrote:

> Hi All,
>
> We have been working on a proposal [1] to make the lifecycle management of
> Gluster maintainers more structured. We intend to make the proposal
> effective around 3.11 (May 2017).
>
> Please review the proposal and let us know your feedback. If you need
> clarity on any existing aspect or feel the need for something additional in
> the proposal, please feel free to let us know.
>


   -

   It’s okay to drop a component if they are not able to find time/energy.
   Owners are requested to minimize disruption to the project by helping with
   transitions and announcing their intentions.

How and to whom should it be communicated that an owner/peer is doing a bad
job and there are better alternatives for the component?


> Thanks!
> Amye & Vijay
>
> [1]  https://hackmd.io/s/SkwiZd4qe
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 
Pranith

Re: [Gluster-devel] Maintainers 2.0 Proposal

2017-03-18 Thread Pranith Kumar Karampuri
On Sat, Mar 18, 2017 at 4:47 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Sat, Mar 18, 2017 at 1:20 AM, Amar Tumballi 
> wrote:
>
>> I don't want to take the discussions in another direction, but want
>> clarity on few things:
>>
>> 1. Does being a maintainer mean they only review/merge patches?
>> 2. Should maintainers be responsible for answering ML / IRC questions
>> (well, they should focus more on documentation IMO)?
>> 3. Whose responsibility is it to maintain the gluster.org webpage? I
>> personally feel the responsibility should be well defined.
>> 4. Can a component have more than one owner? More than one maintainer, etc.?
>>
>
> Having more than one maintainer is the best case we should aim for. I have
> seen EC benefit tremendously because of this. Work keeps moving because at
> least one of Xavi or I is available for discussions.
>

If for some reason we decide we should have only one maintainer, I would
like to be a peer for EC and Xavi should be the maintainer.


>
>
>>
>> Some of these questions should be clearly answered in the document IMO.
>>
>> Regards,
>> Amar
>>
>>
>> On Fri, Mar 17, 2017 at 11:55 PM, Amye Scavarda  wrote:
>>
>>> Posting in line, but it may be pretty hard to follow.
>>> Apologies if I miss something.
>>>
>>> On Fri, Mar 17, 2017 at 11:06 AM, Niels de Vos 
>>> wrote:
>>>
 On Wed, Mar 15, 2017 at 10:12:18PM -0400, Vijay Bellur wrote:
 > Hi All,
 >
 > We have been working on a proposal [1] to make the lifecycle
 management of
 > Gluster maintainers more structured. We intend to make the proposal
 > effective around 3.11 (May 2017).
 >
 > Please review the proposal and let us know your feedback. If you need
 > clarity on any existing aspect or feel the need for something
 additional in
 > the proposal, please feel free to let us know.

 I'll just include the proposal here and add inline comments. I'm not
 sure if this is the best way, or if you would like anyone edit the
 document directly...

 > Thanks!
 > Amye & Vijay
 >
 > [1]  https://hackmd.io/s/SkwiZd4qe



 > # Revised Maintainers for 3.11
 >
 > AI from Community Meeting, March 1:
 > amye to work on revised maintainers draft with vbellur to get out for
 > next maintainer's meeting. We'll approve it 'formally' there, see how
 it
 > works for 3.11.

 The next maintainers meeting happens when many maintainers are at VAULT.
 I would not expect a large attendance at that time. Also, Amye sent an
 email with a different target date?

>>>
>>> Feedback target date of 30 days, that's what I was indicating. This was
>>> reviewed in the maintainers' meeting on March 8 and we're now expanding out
>>> to the larger group.
>>>

 > ## Goals:
 > * Refine how we declare component owners in Gluster
 > * Create a deeper sense of ownership throughout the open source
 project
 > * Welcome more contributors at a project impacting level

 It would be good to make a distinction between the Gluster Community and
 the GlusterFS Project. "Gluster" refers in my understanding to all the
 projects of the Gluster Community. This document looks most aimed at the
 GlusterFS project, with some Gluster Community references.
>>>
>>>
>>>
>>> Is this distinction relevant? We're talking about how we define a
>>> maintainer for contributing to the Gluster community overall. As I work
>>> through this, I see your confusion. I don't think that we'd be able to make
>>> this call for 'all related projects', but for committing back into Gluster
>>> proper, yes.
>>>
>>>

>>> > ## Definition of Owners + Peers
 > * Commit access to the project is not dependent on being a
 maintainer. A
 >   maintainer is a leadership role within the project to help drive the
 >   project forward.

 "the project", is that the glusterfs git repository, or any repository
 that we maintain? How would we see this for projects that contain
 Gluster modules like NFS-Ganesha and Samba? Or are those not considered
 as "our" components?
>>>
>>>
>>> I think initially, we'd want to limit this to just the Gluster project.
>>> Too much expansion and we'll have too much change too quickly.
>>>
>>>

>>> > * Owner - Subject matter expert, help design large feature changes and
 >   decide overall goal of the component. Reviews patches, approves
 >   changes. Responsible for recruiting assisting peers. Owner of
 >   component. (Principle Software Engineer - unconnected to actual role
 >   in Red Hat organization)

 I would say a "subject matter expert" can give a +2 code-review in
 Gerrit, and the "owner" of the component would honour that opinion. I
 fail to see what "Principle Software Engineer" has to do with this if it
 is not connected to a role at Red Hat (hmm, I need to talk to my boss?).


>>> I've gotten feedback that we should revisit t

Re: [Gluster-devel] Maintainers 2.0 Proposal

2017-03-18 Thread Pranith Kumar Karampuri
On Sat, Mar 18, 2017 at 1:20 AM, Amar Tumballi  wrote:

> I don't want to take the discussions in another direction, but want
> clarity on few things:
>
> 1. Does being a maintainer mean they only review/merge patches?
> 2. Should maintainers be responsible for answering ML / IRC questions
> (well, they should focus more on documentation IMO)?
> 3. Whose responsibility is it to maintain the gluster.org webpage? I
> personally feel the responsibility should be well defined.
> 4. Can a component have more than one owner? More than one maintainer, etc.?
>

Having more than one maintainer is the best case we should aim for. I have seen
EC benefit tremendously because of this. Work keeps moving because at least one
of Xavi or I is available for discussions.


>
> Some of these questions should be clearly answered in the document IMO.
>
> Regards,
> Amar
>
>
> On Fri, Mar 17, 2017 at 11:55 PM, Amye Scavarda  wrote:
>
>> Posting in line, but it may be pretty hard to follow.
>> Apologies if I miss something.
>>
>> On Fri, Mar 17, 2017 at 11:06 AM, Niels de Vos  wrote:
>>
>>> On Wed, Mar 15, 2017 at 10:12:18PM -0400, Vijay Bellur wrote:
>>> > Hi All,
>>> >
>>> > We have been working on a proposal [1] to make the lifecycle
>>> management of
>>> > Gluster maintainers more structured. We intend to make the proposal
>>> > effective around 3.11 (May 2017).
>>> >
>>> > Please review the proposal and let us know your feedback. If you need
>>> > clarity on any existing aspect or feel the need for something
>>> additional in
>>> > the proposal, please feel free to let us know.
>>>
>>> I'll just include the proposal here and add inline comments. I'm not
>>> sure if this is the best way, or if you would like anyone edit the
>>> document directly...
>>>
>>> > Thanks!
>>> > Amye & Vijay
>>> >
>>> > [1]  https://hackmd.io/s/SkwiZd4qe
>>>
>>>
>>>
>>> > # Revised Maintainers for 3.11
>>> >
>>> > AI from Community Meeting, March 1:
>>> > amye to work on revised maintainers draft with vbellur to get out for
>>> > next maintainer's meeting. We'll approve it 'formally' there, see how
>>> it
>>> > works for 3.11.
>>>
>>> The next maintainers meeting happens when many maintainers are at VAULT.
>>> I would not expect a large attendance at that time. Also, Amye sent an
>>> email with a different target date?
>>>
>>
>> Feedback target date of 30 days, that's what I was indicating. This was
>> reviewed in the maintainers' meeting on March 8 and we're now expanding out
>> to the larger group.
>>
>>>
>>> > ## Goals:
>>> > * Refine how we declare component owners in Gluster
>>> > * Create a deeper sense of ownership throughout the open source project
>>> > * Welcome more contributors at a project impacting level
>>>
>>> It would be good to make a distinction between the Gluster Community and
>>> the GlusterFS Project. "Gluster" refers in my understanding to all the
>>> projects of the Gluster Community. This document looks most aimed at the
>>> GlusterFS project, with some Gluster Community references.
>>
>>
>>
>> Is this distinction relevant? We're talking about how we define a
>> maintainer for contributing to the Gluster community overall. As I work
>> through this, I see your confusion. I don't think that we'd be able to make
>> this call for 'all related projects', but for committing back into Gluster
>> proper, yes.
>>
>>
>>>
>> > ## Definition of Owners + Peers
>>> > * Commit access to the project is not dependent on being a maintainer.
>>> A
>>> >   maintainer is a leadership role within the project to help drive the
>>> >   project forward.
>>>
>>> "the project", is that the glusterfs git repository, or any repository
>>> that we maintain? How would we see this for projects that contain
>>> Gluster modules like NFS-Ganesha and Samba? Or are those not considered
>>> as "our" components?
>>
>>
>> I think initially, we'd want to limit this to just the Gluster project.
>> Too much expansion and we'll have too much change too quickly.
>>
>>
>>>
>> > * Owner - Subject matter expert, help design large feature changes and
>>> >   decide overall goal of the component. Reviews patches, approves
>>> >   changes. Responsible for recruiting assisting peers. Owner of
>>> >   component. (Principle Software Engineer - unconnected to actual role
>>> >   in Red Hat organization)
>>>
>>> I would say a "subject matter expert" can give a +2 code-review in
>>> Gerrit, and the "owner" of the component would honour that opinion. I
>>> fail to see what "Principle Software Engineer" has to do with this if it
>>> is not connected to a role at Red Hat (hmm, I need to talk to my boss?).
>>>
>>>
>> I've gotten feedback that we should revisit the 'Principal' vs 'Senior'
>> framing - apologies. It was not the intention to make it Red Hat centric in
>> this way, but it was shorthand for responsibility areas. I'm happy to
>> revisit.
>>
>>
>>> > * Peer - assists with design, reviews. Growing into subject matter
>>> >   expert, but not required to be engaged in the overall design of the
>>> >   component

[Gluster-devel] [RFC] fop usage metrics across the filesystem

2017-03-18 Thread Amar Tumballi
I have opened a GitHub issue about this at
https://github.com/gluster/glusterfs/issues/137

I would recommend people comment on the thread there so we can have a
better conversation on the topic, and also keep track of what's happening.

Regards,
Amar

-- 
Amar Tumballi (amarts)