Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Pavel Szalbot
Hi everybody,

I suppose there will be a lot more people affected by the removal of the
driver from Cinder who do not know about it. I am running production
clusters on Mitaka and Newton and did not know about the issue -
Openstack is quite a beast when it comes to keeping pace with updates.

Is there any news from the dev team, from Red Hat, or on Amye's radar?

-ps


On Sun, May 28, 2017 at 9:20 AM, Niels de Vos  wrote:
> On Sat, May 27, 2017 at 08:48:00PM -0400, Vijay Bellur wrote:
>> On Sat, May 27, 2017 at 3:02 AM, Joe Julian  wrote:
>>
>> > On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
>> >
>> >
>> >
>> > On Wed, May 24, 2017 at 9:10 PM, Joe Julian  wrote:
>> >
>> >> Forwarded for posterity and follow-up.
>> >>
>> >>  Forwarded Message 
>> >> Subject: Re: GlusterFS removal from Openstack Cinder
>> >> Date: Fri, 05 May 2017 21:07:27 +
>> >> From: Amye Scavarda  
>> >> To: Eric Harney  , Joe Julian
>> >>  , Vijay Bellur
>> >>  
>> >> CC: Amye Scavarda  
>> >>
>> >> Eric,
>> >> I'm sorry to hear this.
>> >> I'm reaching out internally (within Gluster CI team and CentOS CI which
>> >> supports Gluster) to get an idea of the level of effort we'll need to
>> >> provide to resolve this.
>> >> It'll take me a few days to get this, but this is on my radar. In the
>> >> meantime, is there somewhere I should be looking at for requirements to
>> >> meet this gateway?
>> >>
>> >> Thanks!
>> >> -- amye
>> >>
>> >> On Fri, May 5, 2017 at 16:09 Joe Julian  wrote:
>> >>
>> >>> On 05/05/2017 12:54 PM, Eric Harney wrote:
>> >>> >> On 04/28/2017 12:41 PM, Joe Julian wrote:
>> >>> >>> I learned, today, that GlusterFS was deprecated and removed from
>> >>> >>> Cinder as one of our #gluster (freenode) users was attempting to
>> >>> upgrade openstack. I could find no rationale nor discussion of that
>> >>> >>> removal. Could you please educate me about that decision?
>> >>> >>>
>> >>> >
>> >>> > Hi Joe,
>> >>> >
>> >>> > I can fill in on the rationale here.
>> >>> >
>> >>> > Keeping a driver in the Cinder tree requires running a CI platform to
>> >>> > test that driver and report results against all patchsets submitted to
>> >>> > Cinder.  This is a fairly large burden, which we could not meet once
>> >>> the
>> >>> > Gluster Cinder driver was no longer an active development target at
>> >>> Red Hat.
>> >>> >
>> >>> > This was communicated via a warning issued by the driver for anyone
>> >>> > running the OpenStack Newton code, and via the Cinder release notes for
>> >>> > the Ocata release.  (I can see in retrospect that this was probably not
>> >>> > communicated widely enough.)
>> >>> >
>> >>> > I apologize for not reaching out to the Gluster community about this.
>> >>> >
>> >>> > If someone from the Gluster world is interested in bringing this driver
>> >>> > back, I can help coordinate there.  But it will require someone
>> >>> stepping
>> >>> > in in a big way to maintain it.
>> >>> >
>> >>> > Thanks,
>> >>> > Eric
>> >>>
>> >>> Ah, Red Hat's statement that the acquisition of InkTank was not an
>> >>> abandonment of Gluster seems rather disingenuous now. I'm disappointed.
>> >>>
>> >>
>> > I am a Red Hat employee working on gluster and I am happy with the kind of
>> > investments the company did in GlusterFS. Still am. It is a pretty good
>> > company and really open. I never had any trouble saying something the
>> > management did is wrong when I strongly felt and they would give a decent
>> > reason for their decision.
>> >
>> >
>> > Happy to hear that. Still looks like meddling to an outsider. Not the
>> > Gluster team's fault though (although more participation of the developers
>> > in community meetings would probably help with that feeling of being
>> > disconnected, in my own personal opinion).
>> >
>> >
>> >
>> >>
>> >>> Would you please start a thread on the gluster-users and gluster-devel
>> >>> mailing lists and see if there's anyone willing to take ownership of
>> >>> this. I'm certainly willing to participate as well but my $dayjob has
>> >>> gone more kubernetes than openstack so I have only my limited free time
>> >>> that I can donate.
>> >>>
>> >>
>> > Do we know what would maintaining cinder as active entail? Did Eric get
>> > back to any of you?
>> >
>> >
>> > Haven't heard anything more, no.
>> >
>> >
>>  Policies for maintaining an active driver in cinder can be found at [1]
>> and [2]. We will need some work to make the driver active again (after a
>> revert of the commit that removed the driver from cinder) and to provide CI
>> support as detailed in [2].
>>
>> I will co-ordinate further internal discussions within Red Hat on this
>> topic and provide an update soon on how we can proceed here.
>
> The whole discussion is about integration of Gluster in OpenStack. I
> would really appreciate seeing the discussion in public, on the
> integrat...@gluster.org list. This is a low-traffic list, especially
> dedicated to these kinds of topics. Having the discussion in archives
> will surely help to track t

Re: [Gluster-devel] [Proposal]: New branch (earlier: Changes to how we test and vote each patch)

2017-05-30 Thread Amar Tumballi
All,

Here is another proposal, which I would like to start immediately, to enable
many features for the Gluster 4.0 release.

We will create a 'new' branch called '*experimental*', where anyone can
post a patch, and for which the only Jenkins job that runs is 'smoke' (build +
fops validation).


   - This is where one can push any conceptual change for review and get
   it merged quickly.
      - To make sure we don't delay, a patch that has not been merged after
      1 week would be merged (if there are no -1s).
      - Manually, it can be merged even before that.
   - Review comments on this branch are strictly about concept and design,
   with no nitpicks.
   - Even ./rfc.sh would be changed to allow submission (with a question)
   even in case of errors.
   - All fundamental changes (like the file type in GFID, DHT layout changes,
   etc.) get merged here first, and once baked can be taken as a single patch
   and merged into master, which would make it into the next release branch cut.
   - This branch would have a life of 6 months; after 6 months a new branch
   would be cut from master, so any feature which can't be stabilized in 6
   months needs to be rebased at least once every 6 months onto the new
   branch. This way, it would be much easier for everyone to experiment with
   their ideas.
   - As this branch doesn't need a Bugzilla ID to work, having a GitHub issue
   (for an RFE) would be important; not having one can fetch a -1.
   - I am willing to maintain this branch to keep it sane, and to keep track
   of the patches which get in.

Let me know what each of you thinks.
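
As an illustration, posting a change to such a branch through Gerrit would look
roughly like the sketch below (the 'experimental' branch name is from the proposal
above; the exact ./rfc.sh behaviour would depend on the changes described there):

  $ git checkout -b my-experiment origin/experimental
  $ # ...hack, commit...
  $ git push origin HEAD:refs/for/experimental   # plain Gerrit push
  $ ./rfc.sh                                     # or, once rfc.sh knows about the branch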

On Wed, May 24, 2017 at 2:13 PM, Amar Tumballi  wrote:

> All,
>
> Below is mainly a proposal, and I would like to hear people's thought on
> these.
>
>
>- Over the years, we have added many test cases to our regression test
>suite, and currently the testing time stands at ~5hrs for 1 patch. But the
>current way of '*.t' based regression can't be called a 'complete' test
>for the filesystem.
>
>
>- Correct me if I am wrong, but even when we have to make a release,
>other than the same set of tests, we don't have much to validate the build.
>   - Pranith / Aravinda, I heard there was some effort on these lines,
>   and you guys prepared a list during 3.9 cycle, share them if relevant.
>
> Now considering the above points, and taking the proposal of 'Good Build' from
> Nigel [1], I am thinking of making the below changes to how we look at testing
> and stability.
>
> *'What to test on nightly build':*
>
>- Build verification
>- Run all the regression as it runs now.
>   - Run CentOS regression
>   - Run NetBSD regression
>   - Run coverity
>- Run gcov/lcov (for coverage)
>- Run more tests with currently optional options made as default (like
>brick-multiplexing etc).
>- Open up the infra to community contribution, so anyone can write
>test cases to make sure GlusterFS passes their use cases, every night.
>   - Should be possible to run a python script, ruby script, or a bash
>   script, need not be in a 'prove' like setup.
>
>
> *'master' branch:*
>
>- Make the overall regression lightweight.
>   - Run what the NetBSD tests run now in the CentOS regression (i.e., basic and
>   features in tests).
>   - Don't run NetBSD builds; instead add a compilation test on a CentOS
>   32-bit machine to keep reminding ourselves how many warnings we get.
>- Make sure 'master' branch is well tested in 'Nightly'.
>- Let the approach of maintainers, and of the overall project, be to promote
>new changes, instead of being very sceptical about new patches and ideas.
>- Provide an option to run the whole nightly build suite with a given
>patchset to maintainers, so when in doubt, they can ask for the build to
>complete before merging. Mostly applies to new features or some changes
>which change the way things behave fundamentally.
>
> *'release-x.y' branch:*
>
>- During release planning, come out with a target number of 'coverity'
>issues and a line coverage % to meet. Also consider the number of 'glusto-tests'
>to pass.
>- Agree to branch out early (at least 45 days, compared to the current
>30 days), so we can iron out the issues caused by making the 'master'
>branch process lean.
>
>
>- Change the voting logic, add more tests now (for example, fall back to
>the current regression suite).
>- On the first build, run agreed performance tests and compare the
>numbers with previous versions.
>- Run NetBSD regression now.
>   - Btw, noticed the latest NetBSD package is for 3.8.9 (done in Jan).
>   - Work with Emmanuel   for better options here.
>- Run nightly on the branch.
>- Block the release till we meet the initially agreed metrics during
>the release-plan. (like coverity/glusto-tests, line coverage etc).
>   - For 3.12 release itself we can fix 

Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-30 Thread Zhang Huan
> On 30 May 2017, at 19:58, Raghavendra Gowdappa  wrote:
> 
> 
> 
> - Original Message -
>> From: "Zhang Huan" mailto:zhangh...@open-fs.com>>
>> To: "Raghavendra G" > >
>> Cc: "GlusterFS Maintainers" > >, "Gluster Devel" 
>> mailto:gluster-devel@gluster.org>>, "Kaushal 
>> Madappa"
>> mailto:kmada...@redhat.com>>
>> Sent: Tuesday, May 30, 2017 3:33:09 PM
>> Subject: Re: [Gluster-Maintainers] [Gluster-devel] Backport for "Add back 
>> socket for polling of events
>> immediately..."
>> 
>> 
>> 
>> 
>> On 29 May 2017, at 11:16, Raghavendra G < raghaven...@gluster.com > wrote:
>> 
>> Replying to all queries here:
>> 
>> * Is it a bug or performance enhancement?
>> Its a performance enhancement. No functionality is broken if this patch is
>> not taken in.
>> 
>> * Are there performance numbers to validate the claim?
>> https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c9
>> 
>> * Are there any existing users who need this enhancement?
>> https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27
>> 
>> Though not sure what branch Zhang Huan is on. @Zhang your inputs are needed
>> here.
>> 
>> We are currently on 3.8. Thus the performance number is based on 3.8.
>> If you need more details, please let me know.
> 
> Thanks Zhang. The question was more on the lines of whether you need a backport of
> the fix to 3.8.

Actually, we really need this backported to 3.8. I have seen the backport of it 
to 3.8.
https://review.gluster.org/#/c/15046/ 
Once it gets merged, we will rebase to it and test it as a whole.
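
For anyone wanting to try the pending backport before it is merged, a minimal
sketch of fetching the in-review change from Gerrit for local testing (the
patchset number '1' below is only a placeholder; take the real one from the
review page):

  $ git clone https://review.gluster.org/glusterfs && cd glusterfs
  $ git checkout -b test-15046 origin/release-3.8
  $ git fetch origin refs/changes/46/15046/1
  $ git cherry-pick FETCH_HEAD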

> Can you upgrade to recent releases (say 3.11.x or 3.10.x)?

Sorry, I am afraid not. Glusterfs is one of the key components in our product.
An upgrade alone would break the whole thing. 


> 
>> 
>> 
>> 
>> 
>> 
>> * Do I think this patch _should_ go into any of the released branches?
>> Personally, I don't feel strongly either way. I am fine with this patch not
>> making into any of released branches. But, I do think there are users who
>> are affected with this (Especially EC/Disperse configurations). If they want
>> to stick to the released branches, pulling into released branches will help
>> them. @Pranith/Xavi, what are your opinions on this?
>> 
>> regards,
>> Raghavendra
>> 
>> On Sun, May 28, 2017 at 6:58 PM, Shyam < srang...@redhat.com > wrote:
>> 
>> 
>> On 05/28/2017 09:24 AM, Atin Mukherjee wrote:
>> 
>> 
>> 
>> 
>> On Sun, May 28, 2017 at 1:48 PM, Niels de Vos < nde...@redhat.com
>> > wrote:
>> 
>> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
>>> Or this one: https://review.gluster.org/15036 <
>>> https://review.gluster.org/15036 >
>>> 
>>> This is backported to 3.8/10 and 3.11 and considering the size and impact
>>> of
>>> the change, I wanted to be sure that we are going to accept this across all
>>> 3 releases?
>>> 
>>> @Du, would like your thoughts on this.
>>> 
>>> @niels, @kaushal, @talur, as release owners, could you weigh in as well
>>> please.
>>> 
>>> I am thinking that we get this into 3.11.1 if there is agreement, and not
>>> in
>>> 3.11.0 as we are finalizing the release in 3 days, and this change looks
>>> big, to get in at this time.
>> 
>> 
>> Given 3.11 is going to be a new release, I'd recommend to get this fix
>> in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
>> on this one.
>> 
>> It is not a fix, Atin; it is a more fundamental change to request processing.
>> With 2 days to the release, you want me to merge this?
>> 
>> Is there a *bug* that will surface without this change or is it a performance
>> enhancement?
>> 
>> 
>> 
>> 
>>> 
>>> Further the change is actually an enhancement, and provides performance
>>> benefits, so it is valid as a change itself, but I feel it is too late to
>>> add to the current 3.11 release.
>> 
>> Indeed, and mostly we do not merge enhancements that are non-trivial to
>> stable branches. Each change that we backport introduces the chance of
>> regressions for users with their unknown (and possibly awkward)
>> workloads.
>> 
>> The patch itself looks ok, but it is difficult to predict how the change
>> affects current deployments. I prefer to be conservative and not have
>> this merged in 3.8, at least for now. Are there any statistics in how
>> performance is affected with this change? Having features like this only
>> in newer versions might also convince users to upgrade sooner, 3.8 will
>> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
>> months from now according to our schedule.
>> 
>> Niels
>> 
>> ___
>> maintainers mailing list
>> maintain...@gluster.org 
>> http://lists.gluster.org/mailman/listinfo/maintainers
>> < http://lists.gluster.org/mailman/listinfo/maintainers >
>> 
>> 
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/g

Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian

On 05/30/2017 03:52 PM, Ric Wheeler wrote:

On 05/30/2017 06:37 PM, Joe Julian wrote:

On 05/30/2017 03:24 PM, Ric Wheeler wrote:


As a community, each member needs to make sure that their specific 
use case has the resources it needs to flourish. If some team cares 
about Gluster in openstack, they should step forward and provide the 
engineering and hardware resources needed to make it succeed.


Red Hat has and continues to pour resources into Gluster - Gluster 
is thriving. We have loads of work going on with gluster in RHEV, 
Kubernetes, NFS Ganesha and Samba.


What we are not doing and that has been clear for many years now is 
to invest in Gluster in openstack.


Again, nobody communicated with either the Openstack nor the Gluster 
communities about this, short of deprecation warnings which are not 
the most effective way of reaching people (that may be wrong on the 
part of most users, but unfortunately it's a reality). Red Hat wasn't 
interested in investing in Gluster on Openstack anymore. That's fine. 
It's your money. As a community leader, proponent, and champion, 
however, Red Hat should have at least invested in finding an 
interested party to take over the effort - imho.


I think it is 100% disingenuous to position this as a surprise
withdrawal by Red Hat of Gluster from openstack. The position we
have had with what we have focused on with Gluster has been
exceedingly clear for years.


I am completely sincere. I do not posture or pose. I have absolutely no 
reason to do so. I am not financially connected to gluster in any way. 
The 

Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Ric Wheeler

On 05/30/2017 06:37 PM, Joe Julian wrote:

On 05/30/2017 03:24 PM, Ric Wheeler wrote:



As a community, each member needs to make sure that their specific use case 
has the resources it needs to flourish. If some team cares about Gluster in 
openstack, they should step forward and provide the engineering and hardware 
resources needed to make it succeed.


Red Hat has and continues to pour resources into Gluster - Gluster is 
thriving. We have loads of work going on with gluster in RHEV, Kubernetes, 
NFS Ganesha and Samba.


What we are not doing and that has been clear for many years now is to invest 
in Gluster in openstack.


Again, nobody communicated with either the Openstack nor the Gluster 
communities about this, short of deprecation warnings which are not the most 
effective way of reaching people (that may be wrong on the part of most users, 
but unfortunately it's a reality). Red Hat wasn't interested in investing in 
Gluster on Openstack anymore. That's fine. It's your money. As a community 
leader, proponent, and champion, however, Red Hat should have at least 
invested in finding an interested party to take over the effort - imho.


I think it is 100% disingenuous to position this as a surprise withdrawal by
Red Hat of Gluster from openstack. The position we have had with what we have
focused on with Gluster has been exceedingly clear for years.


As Eric pointed out, this was a warning in the Neutron code and was also in the 
release notes for prior openstack releases.










Would you please start a thread on the gluster-users and gluster-devel
 

Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian

On 05/30/2017 03:24 PM, Ric Wheeler wrote:



As a community, each member needs to make sure that their specific use 
case has the resources it needs to flourish. If some team cares about 
Gluster in openstack, they should step forward and provide the 
engineering and hardware resources needed to make it succeed.


Red Hat has and continues to pour resources into Gluster - Gluster is 
thriving. We have loads of work going on with gluster in RHEV, 
Kubernetes, NFS Ganesha and Samba.


What we are not doing and that has been clear for many years now is to 
invest in Gluster in openstack.


Again, nobody communicated with either the Openstack nor the Gluster 
communities about this, short of deprecation warnings which are not the 
most effective way of reaching people (that may be wrong on the part of 
most users, but unfortunately it's a reality). Red Hat wasn't interested 
in investing in Gluster on Openstack anymore. That's fine. It's your 
money. As a community leader, proponent, and champion, however, Red Hat 
should have at least invested in finding an interested party to take 
over the effort - imho.








Would you please start a thread on the gluster-users and 
gluster-devel
mailing lists and see if there's anyone willing to take 
ownership of
this. I'm certainly willing to participate as well but my 
$dayjob has
gone more kubernetes than openstack so I have only my 
limited free time

that I can donate.


Do we know what would maintaining cinder as active entail? Did Eric 
get back to any of you?


Haven't heard

Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Ric Wheeler

On 05/27/2017 03:02 AM, Joe Julian wrote:

On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:





I am a Red Hat employee working on gluster and I am happy with the kind of 
investments the company did in GlusterFS. Still am. It is a pretty good 
company and really open. I never had any trouble saying something the 
management did is wrong when I strongly felt and they would give a decent 
reason for their decision.


Happy to hear that. Still looks like meddling to an outsider. Not the Gluster 
team's fault though (although more participation of the developers in 
community meetings would probably help with that feeling of being 
disconnected, in my own personal opinion).


As a community, each member needs to make sure that their specific use case has 
the resources it needs to flourish. If some team cares about Gluster in 
openstack, they should step forward and provide the engineering and hardware 
resources needed to make it succeed.


Red Hat has and continues to pour resources into Gluster - Gluster is thriving. 
We have loads of work going on with gluster in RHEV, Kubernetes, NFS Ganesha and 
Samba.


What we are not doing and that has been clear for many years now is to invest in 
Gluster in openstack.






Would you please start a thread on the gluster-users and gluster-devel
mailing lists and see if there's anyone willing to take ownership of
this. I'm certainly willing to participate as well but my $dayjob has
gone more kubernetes than openstack so I have only my limited free time
that I can donate.


Do we know what would maintaining cinder as active entail? Did Eric get back 
to any of you?


Haven't heard anything more, no.


Who in the community that is using gluster in openstack is willing to help with 
their own time and resources to meet the openstack requirements?


Ric

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.11.0: Tagged, packaged and released!

2017-05-30 Thread Shyam

Hi,

3.11.0 has been tagged and packages are built, thanks to all for your 
contributions and help.


With this, the release tracker bug for 3.11.0 is closed and a 3.11.1
tracker is open [1]. Future bugs for 3.11.1 need to be marked as
blockers against this.


Thanks,
Kaushal & Shyam

[1] 3.11.1 tracker: 
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing GlusterFS release 3.11.0 (Short Term Maintenance)

2017-05-30 Thread Shyam
The Gluster community is pleased to announce the release of Gluster 
3.11.0 (packages available at [1]).


This is a short term maintenance (STM) Gluster release that includes
some substantial changes. The features revolve around improvements to
small-file workloads, SELinux support, a Halo replication enhancement
from Facebook, and some usability and performance improvements, among other
bug fixes.


The most notable features and changes are documented in the full release
notes.


Moving forward, Gluster versions 3.11, 3.10 and 3.8 are actively maintained.

With the release of 3.12 in the future, active maintenance of this 
(3.11) STM release will be terminated.


Major changes and features (complete release notes can be found @ [2])

- Switched to storhaug for ganesha and samba high availability
- Added SELinux support for Gluster Volumes
- Several memory leaks are fixed in gfapi during graph switches
- get-state CLI is enhanced to provide client and brick capacity related 
information

- Ability to serve negative lookups from cache has been added
- New xlator to help developers detecting resource leaks has been added
- Feature for metadata-caching/small file performance is production ready
- "Parallel Readdir" feature introduced in 3.10.0 is production ready
- Object versioning is enabled only if bitrot is enabled
- Distribute layer provides more robust transactions during directory 
namespace operations

- gfapi extended readdirplus API has been added
- Improved adoption of standard refcounting functions across the code
- Performance improvements to rebalance have been made
- Halo Replication feature in AFR has been introduced
- FALLOCATE support with EC
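
As an illustration, some of the performance features above are opt-in and can be
enabled per volume with the usual CLI (a rough sketch; option and group names are
best confirmed with 'gluster volume set help' on 3.11):

  # "Parallel Readdir", introduced in 3.10 and production ready in 3.11:
  gluster volume set <VOLNAME> performance.parallel-readdir on

  # Metadata-caching / small-file performance option group:
  gluster volume set <VOLNAME> group metadata-cache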

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.11/3.11.0/


[2] Complete release notes: 
https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-30 Thread Prashanth Pai


- Original Message -
> From: "Amar Tumballi" 
> To: "Shyam" , "Vijay Bellur" , 
> "Prasanna Kalever" ,
> "Ram Edara" 
> Cc: "Gluster Devel" 
> Sent: Tuesday, 30 May, 2017 6:55:07 PM
> Subject: Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope
> 
> 
> I couldn't get time to get back on this list completely. While this looks
> mostly fine, there are few things I wanted to add.
> 
> * Check with few features which users are asking for long, and we are
> still not able to get to it. (Is there such a list somewhere? I prefer
> that to be migrated to github issues, if any).
> * What would be the status of projects depending on glusterfs repo by 4.0
> timeframe. Will we have a release of those projects also?
> 
> 
> * like gluster-block
> * like gluster-swift
> * like gluster-nagios*
> * And any other integration projects (like heketi / gdeploy etc).
> *
> Will these projects promise any features by that time, and does any of them
> require glusterfs changes?
> 
> 
> For these release, how about people sending release-notes and specs before
> starting to write code? :-)

+100

Specs aren't used today the way they are supposed to be. The intent was to have a
documented design discussion involving component owners and reviewers before
writing code, to avoid surprises later. Specs are archived for historical
reference and are not meant to reflect the design of the current implementation.

Not all BZs or github issues marked as RFE need specs.
Example https://review.gluster.org/#/c/17395

Patches implementing a feature shouldn't be merged before the spec.
Example https://review.gluster.org/#/c/16436

Feel free to do away with the 'feature page' template that was inherited from
mediawiki pages. IMHO, contents of the spec should be similar to these detailed
examples:
https://docs.google.com/document/d/1m7pLHKnzqUjcb3RQo8wxaRzENyxq1h1r385jnwUGc2A/edit
https://docs.google.com/document/d/1bbxwjUmKNhA08wTmqJGkVd_KNCyaAMhpzx4dswokyyA/edit

Specs are for developer eyes only and are not targeted at users :)

> Also, should we let wider user community know about these list? or is it too
> early for it. They may ask for few features.
> 
> Regards,
> Amar
> 
> 
> On Thu, May 18, 2017 at 3:31 PM, Soumya Koduri < skod...@redhat.com > wrote:
> 
> 
> 
> 
> On 05/16/2017 02:10 PM, Kaushal M wrote:
> 
> 
> On 16 May 2017 06:16, "Shyam" < srang...@redhat.com
> > wrote:
> 
> Hi,
> 
> Let's start a bit early on 3.12 and 4.0 roadmap items, as there have
> been quite a few discussions around this in various meetups.
> 
> Here is what we are hearing (or have heard), so if you are working
> on any of these items, do put up your github issue, and let us know
> which release you are targeting these for.
> 
> If you are working on something that is not represented here, shout
> out, and we can get that added to the list of items in the upcoming
> releases.
> 
> Once we have a good collection slotted into the respective releases
> (on github), we can further announce the same in the users list as well.
> 
> 3.12:
> 1. Geo-replication to cloud (ie, s3 or glacier like storage target)
> 2. Basic level of throttling support on server side to manage the
> self-heal processes running.
> 3. Brick Multiplexing (Better support, more control)
> 4. GFID to path improvements
> 5. Resolve issues around disconnects and ping-timeouts
> 6. Halo with hybrid mode was supposed to be with 3.12
> 7. Procedures and code for +1 scaling the cluster?
> 8. Lookup-optimized turned on by default.
> 9. Thin client (or server side clustering) - phase 1.
> 
> 
> 10. > We also have the IPV6 patch by FB. This was supposed to go into 3.11
> but
> 
> 
> hasn't. The main thing blocking this is having an actual IPV6
> environment to test it in.
> 
> 11. Also we would like to propose support for leases and lock-owner via gfAPI
> in 3.12.
> 
> There are already POC patches sent by Poornima and Anoop. They need testing
> (have started) and updates. I have raised github-issue [1] to track the
> same.
> 
> 
> 
> 
> 
> 
> 4.0: (more thematic than actual features at the moment)
> 1. Separation of Management and Filesystem layers (aka GlusterD2
> related efforts)
> 2. Scaling Distribution logic
> 3. Better consistency with rename() and link() operations
> 4. Thin client || Clustering Logic on server side - Phase 2
> 5. Quota: re-look at optimal support
> 6. Improvements in debug-ability and more focus on testing coverage
> based on use-cases.
> 7. Zero-copy Writes
> 
> There was some effort put up by Sachin wrt this feature[2]. I would like to
> take it forward and propose the design changes if needed to be consumed by
> external applications (at-least existing ones like NFS-Ganesha or Samba).
> Github issue#[3]
> 
> Thanks,
> Soumya
> 
> [1] https://github.com/gluster/glusterfs/issues/213
> [2] https://review.gluster.org/#/c/14784/
> [3] https://github.com/gluster/glusterfs/issues/214
> 
> 
> 
> 
> Components moving out of support

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-30 Thread Amar Tumballi
On Tue, May 30, 2017 at 6:42 PM, Shyam  wrote:

> On 05/30/2017 05:28 AM, Krutika Dhananjay wrote:
>
>> You're right. With brick graphs, this will be a problem.
>>
>> Couple of options:
>>
>> 1. To begin with we identify points where we think it would be useful to
>> load io-stats in the brick graph and unconditionally have
>> glusterd-volgen load them in the volfile only at these places (not very
>> useful if we want to load trace xl though. Plus, this again makes
>> io-stats placement static).
>>
>
> I think this is needed (easier to get in), so +1 for this.
>
> Additionally, if this is chosen, we may need specific triggers for each
> instance, to target measuring the io-stats. IOW, generic io-stats can
> measure below FUSE (as an example) and below server-protocol. Then, we may
> want to enable io-threads (assuming this is one instance on the brick that
> is a static placement), or POSIX (or both/all) specifically, than have them
> enabled by default when io-stats is turned on (which is the current
> behaviour).
>
> Does this make sense?
>
>
>> 2. Embed the trace/io-stats functionality within xlator_t object itself,
>> and keep the accounting disabled by default. Only when required, the
>> user can perhaps enable the accounting options with volume-set or
>> through volume-profile start command for the brief period where they
>> want to capture the stats and disable it as soon as they're done.
>>
>
> This is a better longer-term solution IMO. This way there is no further
> injection of the io-stats xlator, and we get a lot more control over this.
>
> Depending on time to completion, I would choose 1/2 as presented above.
> This is because, I see a lot of value in this and in answering user queries
> on what is slowing down their systems, so sooner we have this the better
> (say 3.12), if (2) is possible by then, more power to it.
>
>
I started this issue for the same reason:
https://github.com/gluster/glusterfs/issues/137

Also sent some patches :-) I am open to assisting if anyone wants to pick up
this work and take it forward.
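
For context, a rough sketch of the coarse-grained knobs available today, which the
options discussed above would make more targeted (volume name 'testvol' reused from
the earlier example; exact option names are best confirmed with 'gluster volume set
help'):

  # Whole-volume io-stats accounting via the profile command:
  gluster volume profile testvol start
  gluster volume profile testvol info
  gluster volume profile testvol stop

  # io-stats diagnostics options, settable only volume-wide today:
  gluster volume set testvol diagnostics.latency-measurement on
  gluster volume set testvol diagnostics.count-fop-hits on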

Krutika, also, it would be good if you can capture these discussions in GitHub
issues so we can link to other such efforts, if any.

Regards,
Amar


>
>> Let me know what you think.
>>
>> -Krutika
>>
>> On Fri, May 26, 2017 at 9:19 PM, Shyam > > wrote:
>>
>> On 05/26/2017 05:44 AM, Krutika Dhananjay wrote:
>>
>> Hi,
>>
>> debug/io-stats and debug/trace are immensely useful for isolating
>> translators that are performance bottlenecks and those that are
>> causing
>> iatt inconsistencies, respectively.
>>
>> There are other translators too under xlators/debug such as
>> error-gen,
>> which are useful for debugging/testing our code.
>>
>> The trick is to load these above and below one or more suspect
>> translators, run the test and analyse the output they dump and
>> debug
>> your problem.
>>
>> Unfortunately, there is no way to load these at specific points
>> in the
>> graph using the volume-set CLI as of today. Our only option is to
>> manually edit the volfile and restart the process and be
>> super-careful
>> not to perform *any* volume-{reset,set,profile} operation and
>> graph
>> switch operations in general that could rewrite the volfile,
>> wiping out
>> all previous edits to it.
>>
>> I propose the following CLI for achieving the same:
>>
>> # gluster volume set  {debug.trace, debug.io-stats,
>> debug.error-gen} 
>>
>> where  represents the name of the translator above
>> which you
>> want this translator loaded (as parent).
>>
>> For example, if i have a 2x2 dis-rep volume named testvol and I
>> want to
>> load trace above and below first child of DHT, I execute the
>> following
>> commands:
>>
>> # gluster volume set  debug.trace testvol-replicate-0
>> # gluster volume set  debug.trace testvol-client-0
>> # gluster volume set  debug.trace testvol-client-1
>>
>> The corresponding debug/trace translators will be named
>> testvol-replicate-0-trace-parent, testvol-client-0-trace-parent,
>> testvol-client-1-trace-parent and so on.
>>
>> To revert the change, the user simply uses volume-reset CLI:
>>
>> # gluster volume reset  testvol-replicate-0-trace-parent
>> # gluster volume reset  testvol-client-0-trace-parent
>> # gluster volume reset  testvol-client-1-trace-parent
>>
>> What should happen when the translator with a
>> trace/io-stat/error-gen
>> parent gets disabled?
>> Well glusterd should be made to take care to remove the trace xl
>> too
>> from the graph.
>>
>>
>>
>> Comments and suggestions welcome.
>>
>>
>> +1, dynamic placement of io-stats was something that I added to this

Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-30 Thread Amar Tumballi
I couldn't get time to get back on this list completely. While this looks
mostly fine, there are a few things I wanted to add.

   - Check on the few features which users have been asking about for a long
   time and which we are still not able to get to. (Is there such a list
   somewhere? I would prefer that it be migrated to github issues, if any.)
   - What would be the status of projects depending on the glusterfs repo by
   the 4.0 timeframe? Will we have a release of those projects also?
      - like gluster-block
      - like gluster-swift
      - like gluster-nagios*
      - and any other integration projects (like heketi / gdeploy etc.)
   - Will these projects promise any features by that time, and do any of
   them require glusterfs changes?

For these releases, how about people sending release notes and specs before
starting to write code? :-)
Also, should we let the wider user community know about these lists, or is it
too early for that? They may ask for a few features.

Regards,
Amar


On Thu, May 18, 2017 at 3:31 PM, Soumya Koduri  wrote:

>
>
> On 05/16/2017 02:10 PM, Kaushal M wrote:
>
>> On 16 May 2017 06:16, "Shyam" > > wrote:
>>
>> Hi,
>>
>> Let's start a bit early on 3.12 and 4.0 roadmap items, as there have
>> been quite a few discussions around this in various meetups.
>>
>> Here is what we are hearing (or have heard), so if you are working
>> on any of these items, do put up your github issue, and let us know
>> which release you are targeting these for.
>>
>> If you are working on something that is not represented here, shout
>> out, and we can get that added to the list of items in the upcoming
>> releases.
>>
>> Once we have a good collection slotted into the respective releases
>> (on github), we can further announce the same in the users list as
>> well.
>>
>> 3.12:
>> 1. Geo-replication to cloud (ie, s3 or glacier like storage target)
>> 2. Basic level of throttling support on server side to manage the
>> self-heal processes running.
>> 3. Brick Multiplexing (Better support, more control)
>> 4. GFID to path improvements
>> 5. Resolve issues around disconnects and ping-timeouts
>> 6. Halo with hybrid mode was supposed to be with 3.12
>> 7. Procedures and code for +1 scaling the cluster?
>> 8. Lookup-optimized turned on by default.
>> 9. Thin client (or server side clustering) - phase 1.
>>
>>
>> 10. > We also have the IPV6 patch by FB. This was supposed to go into
> 3.11 but
>
>> hasn't. The main thing blocking this is having an actual IPV6
>> environment to test it in.
>>
>
> 11. Also we would like to propose support for leases and lock-owner via
> gfAPI in 3.12.
>
> There are already POC patches sent by Poornima and Anoop. They need
> testing (have started) and updates. I have raised github-issue [1] to track
> the same.
>
>
>
>>
>> 4.0: (more thematic than actual features at the moment)
>> 1. Separation of Management and Filesystem layers (aka GlusterD2
>> related efforts)
>> 2. Scaling Distribution logic
>> 3. Better consistency with rename() and link() operations
>> 4. Thin client || Clustering Logic on server side - Phase 2
>> 5. Quota: re-look at optimal support
>> 6. Improvements in debug-ability and more focus on testing coverage
>> based on use-cases.
>>
>   7. Zero-copy Writes
>
> There was some effort put up by Sachin wrt this feature[2]. I would like
> to take it forward and propose the design changes if needed to be consumed
> by external applications (at-least existing ones like NFS-Ganesha or
> Samba). Github issue#[3]
>
> Thanks,
> Soumya
>
> [1] https://github.com/gluster/glusterfs/issues/213
> [2] https://review.gluster.org/#/c/14784/
> [3] https://github.com/gluster/glusterfs/issues/214
>
>
>> Components moving out of support in possibly 4.0
>> - Stripe translator
>> - AFR with just 2 subvolume (either use Arbiter or 3 way replicate)
>> - Re-validate few performance translator's presence.
>>
>> Thanks,
>> Shyam
>>
>>
>>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-30 Thread Shyam

On 05/30/2017 05:28 AM, Krutika Dhananjay wrote:

You're right. With brick graphs, this will be a problem.

Couple of options:

1. To begin with we identify points where we think it would be useful to
load io-stats in the brick graph and unconditionally have
glusterd-volgen load them in the volfile only at these places (not very
useful if we want to load trace xl though. Plus, this again makes
io-stats placement static).


I think this is needed (easier to get in), so +1 for this.

Additionally, if this is chosen, we may need specific triggers for each 
instance, to target measuring the io-stats. IOW, generic io-stats can 
measure below FUSE (as an example) and below server-protocol. Then, we 
may want to enable io-threads (assuming this is one instance on the 
brick that is a static placement), or POSIX (or both/all) specifically, 
rather than have them enabled by default when io-stats is turned on (which is
the current behaviour).


Does this make sense?



2. Embed the trace/io-stats functionality within xlator_t object itself,
and keep the accounting disabled by default. Only when required, the
user can perhaps enable the accounting options with volume-set or
through volume-profile start command for the brief period where they
want to capture the stats and disable it as soon as they're done.


This is a better longer-term solution IMO. This way there is no further
injection of the io-stats xlator, and we get a lot more control over this.


Depending on time to completion, I would choose 1/2 as presented above. 
This is because, I see a lot of value in this and in answering user 
queries on what is slowing down their systems, so sooner we have this 
the better (say 3.12), if (2) is possible by then, more power to it.




Let me know what you think.

-Krutika

On Fri, May 26, 2017 at 9:19 PM, Shyam mailto:srang...@redhat.com>> wrote:

On 05/26/2017 05:44 AM, Krutika Dhananjay wrote:

Hi,

debug/io-stats and debug/trace are immensely useful for isolating
translators that are performance bottlenecks and those that are
causing
iatt inconsistencies, respectively.

There are other translators too under xlators/debug such as
error-gen,
which are useful for debugging/testing our code.

The trick is to load these above and below one or more suspect
translators, run the test and analyse the output they dump and debug
your problem.

Unfortunately, there is no way to load these at specific points
in the
graph using the volume-set CLI as of today. Our only option is to
manually edit the volfile and restart the process and be
super-careful
not to perform *any* volume-{reset,set,profile} operation and graph
switch operations in general that could rewrite the volfile,
wiping out
all previous edits to it.

I propose the following CLI for achieving the same:

# gluster volume set  {debug.trace, debug.io-stats,
debug.error-gen} 

where  represents the name of the translator above
which you
want this translator loaded (as parent).

For example, if i have a 2x2 dis-rep volume named testvol and I
want to
load trace above and below first child of DHT, I execute the
following
commands:

# gluster volume set  debug.trace testvol-replicate-0
# gluster volume set  debug.trace testvol-client-0
# gluster volume set  debug.trace testvol-client-1

The corresponding debug/trace translators will be named
testvol-replicate-0-trace-parent, testvol-client-0-trace-parent,
testvol-client-1-trace-parent and so on.

To revert the change, the user simply uses volume-reset CLI:

# gluster volume reset  testvol-replicate-0-trace-parent
# gluster volume reset  testvol-client-0-trace-parent
# gluster volume reset  testvol-client-1-trace-parent

What should happen when the translator with a
trace/io-stat/error-gen
parent gets disabled?
Well glusterd should be made to take care to remove the trace xl too
from the graph.



Comments and suggestions welcome.


+1, dynamic placement of io-stats was something that I added to this
spec [1] as well. So I am all for the change.

I have one problem though that bothered me when I wrote the spec,
currently brick vol files are static, and do not undergo a graph
change (or code is not yet ready to do that). So when we want to do
this on the bricks, what happens? Do you have solutions for the
same? I am interested, hence asking!

[1] Initial feature description for improved io-stats:

https://review.gluster.org/#/c/16558/1/under_review/Performance_monitoring_and_debugging.md





_

Re: [Gluster-devel] Release 3.11: Release notes updates!!! (1 day remaining)

2017-05-30 Thread Shyam

On 05/30/2017 12:43 AM, Niels de Vos wrote:

On Mon, May 29, 2017 at 10:31:22AM -0400, Shyam wrote:

On 05/25/2017 05:23 PM, Shyam wrote:

Hi,

If your name is on the following list, we need your help in updating the
release notes and thus closing out the final stages of the 3.11 release.
Please do the needful by the end of this week! (which means there is a single
day remaining, unless you intend to do this over the weekend)

@pranith, @spalai, @kotresh, @niels, @poornima, @jiffin, @samikshan, @kaleb


I am yet to receive updates for the 4 issues below, so I am writing the release
notes for these myself (within the next 2 hours possibly), and adding the
appropriate members for review. There is a chance that the patch may get
merged before you have a chance to review it, so any
technical accuracy issues would have to be corrected asynchronously.


Thanks Shyam, and sorry for the late response! I've reviewed the notes
you have merged for the features that I was involved with, and am okay
with those details. https://review.gluster.org/17416 has been posted for
the SELinux feature.


Awesome! thank you, and with that we are ready to tag the release, which 
will be done in a short while.




Niels




3) Distritbute: [RFE] More robust transactions during directory
namespace operations. (@kotresh) (#191)



4) bitrot: [RFE] Enable object versioning only if bitrot is enabled.
(@kotresh) (#188)



5) New xlator to help developers detecting resource leaks (@niels) (#176)



8) SELinux support for Gluster Volumes (@niels @jiffin) (#55)


Shyam

"Releases are made better together"
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2017-05-30-1b1f871c (master branch)

2017-05-30 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-05-30-1b1f871c
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-30 Thread Raghavendra Gowdappa


- Original Message -
> From: "Zhang Huan" 
> To: "Raghavendra G" 
> Cc: "GlusterFS Maintainers" , "Gluster Devel" 
> , "Kaushal Madappa"
> 
> Sent: Tuesday, May 30, 2017 3:33:09 PM
> Subject: Re: [Gluster-Maintainers] [Gluster-devel] Backport for "Add back 
> socket for polling of events
> immediately..."
> 
> 
> 
> 
> On 29 May 2017, at 11:16, Raghavendra G < raghaven...@gluster.com > wrote:
> 
> Replying to all queries here:
> 
> * Is it a bug or performance enhancement?
> Its a performance enhancement. No functionality is broken if this patch is
> not taken in.
> 
> * Are there performance numbers to validate the claim?
> https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c9
> 
> * Are there any existing users who need this enhancement?
> https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27
> 
> Though not sure what branch Zhang Huan is on. @Zhang your inputs are needed
> here.
> 
> We are currently on 3.8. Thus the performance number is based on 3.8.
> If you need more details, please let me know.

Thanks, Zhang. The question was more along the lines of whether you need a
backport of the fix to 3.8. Can you upgrade to a more recent release (say
3.11.x or 3.10.x)?

> 
> 
> 
> 
> 
> * Do I think this patch _should_ go into any of the released branches?
> Personally, I don't feel strongly either way. I am fine with this patch not
> making it into any of the released branches. But I do think there are users
> who are affected by this (especially EC/Disperse configurations). If they
> want to stick to the released branches, pulling it into those branches will
> help them. @Pranith/Xavi, what are your opinions on this?
> 
> regards,
> Raghavendra
> 
> On Sun, May 28, 2017 at 6:58 PM, Shyam < srang...@redhat.com > wrote:
> 
> 
> On 05/28/2017 09:24 AM, Atin Mukherjee wrote:
> 
> 
> 
> 
> On Sun, May 28, 2017 at 1:48 PM, Niels de Vos < nde...@redhat.com
> > wrote:
> 
> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
> > Or this one: https://review.gluster.org/15036
> > 
> > This is backported to 3.8, 3.10 and 3.11, and considering the size and impact
> > of the change, I wanted to be sure that we are going to accept this across
> > all 3 releases?
> > 
> > @Du, would like your thoughts on this.
> > 
> > @niels, @kaushal, @talur, as release owners, could you weigh in as well
> > please.
> > 
> > I am thinking that we get this into 3.11.1 if there is agreement, and not
> > into 3.11.0, as we are finalizing the release in 3 days and this change
> > looks too big to get in at this time.
> 
> 
> Given 3.11 is going to be a new release, I'd recommend getting this fix
> in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
> on this one.
> 
> It is not a fix, Atin; it is a more fundamental change to request
> processing. With 2 days to the release, you want me to merge this?
> 
> Is there a *bug* that will surface without this change or is it a performance
> enhancement?
> 
> 
> 
> 
> > 
> > Further, the change is actually an enhancement and provides performance
> > benefits, so it is valid as a change in itself, but I feel it is too late
> > to add to the current 3.11 release.
> 
> Indeed, and we mostly do not merge non-trivial enhancements to stable
> branches. Each change that we backport introduces the chance of
> regressions for users with their unknown (and possibly awkward)
> workloads.
> 
> The patch itself looks ok, but it is difficult to predict how the change
> affects current deployments. I prefer to be conservative and not have
> this merged in 3.8, at least for now. Are there any statistics on how
> performance is affected by this change? Having features like this only
> in newer versions might also convince users to upgrade sooner; 3.8 will
> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
> months from now according to our schedule.
> 
> Niels
> 
> 
> 
> 
> --
> Raghavendra G


Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-30 Thread Zhang Huan
> On 29 May 2017, at 11:16, Raghavendra G  wrote:
> 
> Replying to all queries here:
> 
> * Is it a bug or performance enhancement?
>   It's a performance enhancement. No functionality is broken if this patch is
> not taken in.
> 
> * Are there performance numbers to validate the claim?
>   https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c9 
> 
> 
> * Are there any existing users who need this enhancement?
>   https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27 
> 
> 
>   Though not sure what branch Zhang Huan is on. @Zhang your inputs are needed 
> here.

We are currently on 3.8. Thus the performance number is based on 3.8.
If you need more details, please let me know.

> 
> * Do I think this patch _should_ go into any of the released branches?
>   Personally, I don't feel strongly either way. I am fine with this patch not
> making it into any of the released branches. But I do think there are users
> who are affected by this (especially EC/Disperse configurations). If they
> want to stick to the released branches, pulling it into those branches will
> help them. @Pranith/Xavi, what are your opinions on this?
> 
> regards,
> Raghavendra
> 
> On Sun, May 28, 2017 at 6:58 PM, Shyam  > wrote:
> On 05/28/2017 09:24 AM, Atin Mukherjee wrote:
> 
> 
> On Sun, May 28, 2017 at 1:48 PM, Niels de Vos wrote:
> 
> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
> > Or this one: https://review.gluster.org/15036
> >
> > This is backported to 3.8, 3.10 and 3.11, and considering the size and
> > impact of the change, I wanted to be sure that we are going to accept this
> > across all 3 releases?
> >
> > @Du, would like your thoughts on this.
> >
> > @niels, @kaushal, @talur, as release owners, could you weigh in as well
> > please.
> >
> > I am thinking that we get this into 3.11.1 if there is agreement, and not
> > into 3.11.0, as we are finalizing the release in 3 days and this change
> > looks too big to get in at this time.
> 
> 
> Given 3.11 is going to be a new release, I'd recommend getting this fix
> in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
> on this one.
> 
> It is not a fix, Atin; it is a more fundamental change to request
> processing. With 2 days to the release, you want me to merge this?
> 
> Is there a *bug* that will surface without this change or is it a performance 
> enhancement?
> 
> 
> >
> > Further, the change is actually an enhancement and provides performance
> > benefits, so it is valid as a change in itself, but I feel it is too late
> > to add to the current 3.11 release.
> 
> Indeed, and we mostly do not merge non-trivial enhancements to stable
> branches. Each change that we backport introduces the chance of
> regressions for users with their unknown (and possibly awkward)
> workloads.
> 
> The patch itself looks ok, but it is difficult to predict how the change
> affects current deployments. I prefer to be conservative and not have
> this merged in 3.8, at least for now. Are there any statistics on how
> performance is affected by this change? Having features like this only
> in newer versions might also convince users to upgrade sooner; 3.8 will
> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
> months from now according to our schedule.
> 
> Niels
> 
> 
> 
> 
> 
> 
> -- 
> Raghavendra G

Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-30 Thread Krutika Dhananjay
Xavi and I would like to propose the transaction framework for 4.0 as a
stretch goal.

-Krutika

On Tue, May 16, 2017 at 6:16 AM, Shyam  wrote:

> Hi,
>
> Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been
> quite a few discussions around this in various meetups.
>
> Here is what we are hearing (or have heard), so if you are working on any
> of these items, do put up your github issue, and let us know which release
> you are targeting these for.
>
> If you are working on something that is not represented here, shout out,
> and we can get that added to the list of items in the upcoming releases.
>
> Once we have a good collection slotted into the respective releases (on
> github), we can further announce the same in the users list as well.
>
> 3.12:
> 1. Geo-replication to cloud (i.e., S3- or Glacier-like storage targets)
> 2. Basic level of throttling support on the server side to manage the
> running self-heal processes.
> 3. Brick Multiplexing (Better support, more control)
> 4. GFID to path improvements
> 5. Resolve issues around disconnects and ping-timeouts
> 6. Halo with hybrid mode (this was supposed to land with 3.12)
> 7. Procedures and code for +1 scaling the cluster?
> 8. Lookup-optimize turned on by default.
> 9. Thin client (or server side clustering) - phase 1.
>
> 4.0: (more thematic than actual features at the moment)
> 1. Separation of Management and Filesystem layers (aka GlusterD2 related
> efforts)
> 2. Scaling Distribution logic
> 3. Better consistency with rename() and link() operations
> 4. Thin client || Clustering Logic on server side - Phase 2
> 5. Quota: re-look at optimal support
> 6. Improvements in debug-ability and more focus on testing coverage based
> on use-cases.
>
> Components possibly moving out of support in 4.0
> - Stripe translator
> - AFR with just 2 subvolumes (either use Arbiter or 3-way replication)
> - Re-validate the presence of a few performance translators.
>
> Thanks,
> Shyam
>

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-30 Thread Krutika Dhananjay
You're right. With brick graphs, this will be a problem.

Couple of options:

1. To begin with, we identify points where we think it would be useful to
load io-stats in the brick graph and have glusterd-volgen unconditionally
load it in the volfile only at those places (not very useful if we want
to load the trace xl though; plus, this again makes io-stats placement static).

2. Embed the trace/io-stats functionality within the xlator_t object itself,
and keep the accounting disabled by default. Only when required, the user
can enable the accounting options with volume-set or through the
volume profile start command for the brief period in which they want to
capture the stats, and disable it again as soon as they're done (a rough
sketch of this idea follows below).
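
To make option 2 a little more concrete, here is a minimal, self-contained C
sketch of the idea, purely as an illustration: per-fop counters that are
always compiled in next to the translator object, cost almost nothing while
disabled, and only start accumulating once toggled at runtime (conceptually
what volume-set or volume profile start/stop would flip). All names below
(xlator_like_t, fop_stats_t, stats_account, ...) are made up for this sketch
and are not the actual GlusterFS structures or APIs.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum { FOP_LOOKUP, FOP_WRITEV, FOP_MAX };

typedef struct {
    atomic_bool  enabled;           /* accounting is off by default */
    atomic_ulong hits[FOP_MAX];     /* per-fop hit counters */
} fop_stats_t;

typedef struct {
    const char  *name;
    fop_stats_t  stats;             /* embedded in the xlator, always present */
} xlator_like_t;

/* Called on every fop; a cheap no-op unless accounting is enabled. */
static void stats_account(xlator_like_t *xl, int fop)
{
    if (!atomic_load_explicit(&xl->stats.enabled, memory_order_relaxed))
        return;
    atomic_fetch_add_explicit(&xl->stats.hits[fop], 1, memory_order_relaxed);
}

/* Conceptually what the volume-set / volume profile toggle would call. */
static void stats_toggle(xlator_like_t *xl, bool on)
{
    atomic_store(&xl->stats.enabled, on);
}

int main(void)
{
    static xlator_like_t xl = { .name = "testvol-replicate-0" };

    stats_account(&xl, FOP_LOOKUP);     /* ignored: accounting is off */
    stats_toggle(&xl, true);            /* user starts capturing stats */
    stats_account(&xl, FOP_LOOKUP);
    stats_account(&xl, FOP_WRITEV);
    stats_toggle(&xl, false);           /* user is done */

    printf("%s: lookup=%lu writev=%lu\n", xl.name,
           (unsigned long)atomic_load(&xl.stats.hits[FOP_LOOKUP]),
           (unsigned long)atomic_load(&xl.stats.hits[FOP_WRITEV]));
    return 0;
}

The point is just that, unlike option 1, nothing about the volfile or the
graph has to change to switch the accounting on and off.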

Let me know what you think.

-Krutika

On Fri, May 26, 2017 at 9:19 PM, Shyam  wrote:

> On 05/26/2017 05:44 AM, Krutika Dhananjay wrote:
>
>> Hi,
>>
>> debug/io-stats and debug/trace are immensely useful for isolating
>> translators that are performance bottlenecks and those that are causing
>> iatt inconsistencies, respectively.
>>
>> There are other translators too under xlators/debug such as error-gen,
>> which are useful for debugging/testing our code.
>>
>> The trick is to load these above and below one or more suspect
>> translators, run the test and analyse the output they dump and debug
>> your problem.
>>
>> Unfortunately, there is no way to load these at specific points in the
>> graph using the volume-set CLI as of today. Our only option is to
>> manually edit the volfile and restart the process, and be super-careful
>> not to perform *any* volume-{reset,set,profile} operation, or graph
>> switch operations in general, that could rewrite the volfile and wipe out
>> all previous edits to it.
>>
>> I propose the following CLI for achieving the same:
>>
>> # gluster volume set <volname> {debug.trace, debug.io-stats,
>> debug.error-gen} <xl-name>
>>
>> where <xl-name> represents the name of the translator above which you
>> want this translator loaded (as parent).
>>
>> For example, if I have a 2x2 dis-rep volume named testvol and I want to
>> load trace above and below the first child of DHT, I execute the following
>> commands:
>>
>> # gluster volume set testvol debug.trace testvol-replicate-0
>> # gluster volume set testvol debug.trace testvol-client-0
>> # gluster volume set testvol debug.trace testvol-client-1
>>
>> The corresponding debug/trace translators will be named
>> testvol-replicate-0-trace-parent, testvol-client-0-trace-parent,
>> testvol-client-1-trace-parent and so on.
>>
>> To revert the change, the user simply uses volume-reset CLI:
>>
>> # gluster volume reset testvol testvol-replicate-0-trace-parent
>> # gluster volume reset testvol testvol-client-0-trace-parent
>> # gluster volume reset testvol testvol-client-1-trace-parent
>>
>> What should happen when the translator with a trace/io-stats/error-gen
>> parent gets disabled?
>> Well, glusterd should be made to take care of removing the trace xl from
>> the graph as well.
>>
>>
>>
>> Comments and suggestions welcome.
>>
>
> +1, dynamic placement of io-stats was something that I added to this spec
> [1] as well. So I am all for the change.
>
> I have one problem, though, that bothered me when I wrote the spec:
> currently brick volfiles are static and do not undergo a graph change (or
> the code is not yet ready to do that). So when we want to do this on the
> bricks, what happens? Do you have a solution for this? I am interested,
> hence asking!
>
> [1] Initial feature description for improved io-stats:
> https://review.gluster.org/#/c/16558/1/under_review/Performance_monitoring_and_debugging.md
>