Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Raghavendra G
+gluster-users

On Mon, May 29, 2017 at 8:46 AM, Raghavendra G wrote:

> Replying to all queries here:
>
> * Is it a bug fix or a performance enhancement?
>   It's a performance enhancement. No functionality is broken if this patch
> is not taken in.
>
> * Are there performance numbers to validate the claim?
>   https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c9
>
> * Are there any existing users who need this enhancement?
>   https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27
>
>   Though I am not sure which branch Zhang Huan is on. @Zhang, your inputs are
> needed here.
>
> * Do I think this patch _should_ go into any of the released branches?
>   Personally, I don't feel strongly either way. I am fine with this patch
> not making it into any of the released branches. But I do think there are
> users who are affected by this (especially EC/disperse configurations). If
> they want to stick to the released branches, pulling it into those branches
> will help them. @Pranith/Xavi, what are your opinions on this?
>
> regards,
> Raghavendra
>
> On Sun, May 28, 2017 at 6:58 PM, Shyam  wrote:
>
>> On 05/28/2017 09:24 AM, Atin Mukherjee wrote:
>>
>>>
>>>
>>> On Sun, May 28, 2017 at 1:48 PM, Niels de Vos wrote:
>>> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
>>> > Or this one: https://review.gluster.org/15036
>>> >
>>> > This is backported to 3.8/10 and 3.11 and considering the size and
>>> > impact of the change, I wanted to be sure that we are going to accept
>>> > this across all 3 releases?
>>> >
>>> > @Du, would like your thoughts on this.
>>> >
>>> > @niels, @kaushal, @talur, as release owners, could you weigh in as
>>> > well please.
>>> >
>>> > I am thinking that we get this into 3.11.1 if there is agreement,
>>> > and not in 3.11.0 as we are finalizing the release in 3 days, and
>>> > this change looks big, to get in at this time.
>>>
>>>
>>> Given 3.11 is going to be a new release, I'd recommend getting this fix
>>> in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
>>> on this one.
>>>
>>
>> It is not a fix, Atin; it is a more fundamental change to request
>> processing. With 2 days to the release, you want me to merge this?
>>
>> Is there a *bug* that will surface without this change or is it a
>> performance enhancement?
>>
>>
>>> >
>>> > Further the change is actually an enhancement, and provides
>>> performance
>>> > benefits, so it is valid as a change itself, but I feel it is too
>>> late to
>>> > add to the current 3.11 release.
>>>
>>> Indeed, and mostly we do not merge enhancements that are non-trivial to
>>> stable branches. Each change that we backport introduces the chance of
>>> regressions for users with their unknown (and possibly awkward)
>>> workloads.
>>>
>>> The patch itself looks ok, but it is difficult to predict how the change
>>> affects current deployments. I prefer to be conservative and not have
>>> this merged in 3.8, at least for now. Are there any statistics on how
>>> performance is affected by this change? Having features like this only
>>> in newer versions might also convince users to upgrade sooner; 3.8 will
>>> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
>>> months from now according to our schedule.
>>>
>>> Niels
>>>
>>> ___
>>> maintainers mailing list
>>> maintain...@gluster.org 
>>> http://lists.gluster.org/mailman/listinfo/maintainers
>>> 
>>>
>>>
>>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Raghavendra G
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Raghavendra G
Replying to all queries here:

* Is it a bug fix or a performance enhancement?
  It's a performance enhancement (see the sketch below for the general idea of
the change). No functionality is broken if this patch is not taken in.

* Are there performance numbers to validate the claim?
  https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c9

* Are there any existing users who need this enhancement?
  https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27

  Though I am not sure which branch Zhang Huan is on. @Zhang, your inputs are
needed here.

* Do I think this patch _should_ go into any of the released branches?
  Personally, I don't feel strongly either way. I am fine with this patch
not making it into any of the released branches. But I do think there are
users who are affected by this (especially EC/disperse configurations). If
they want to stick to the released branches, pulling it into those branches
will help them. @Pranith/Xavi, what are your opinions on this?
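
For anyone who hasn't looked at the patch, here is a minimal sketch (not the
actual GlusterFS transport code) of the general idea: the socket is re-armed
in the event poller as soon as a complete message has been read off the wire,
instead of after the request has finished processing. The length-prefixed
framing, the epoll/EPOLLONESHOT setup and the helper names below are
assumptions made purely for this illustration.

/* Sketch only: assumes 'sock' was registered with the epoll instance
 * 'epfd' using EPOLLIN | EPOLLONESHOT, and that messages on the wire
 * are framed as a 4-byte network-order length followed by the payload. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/epoll.h>

static void process_message(char *msg, uint32_t len)
{
    /* Stand-in for the (potentially slow) request processing. */
    printf("processing %u bytes\n", len);
    free(msg);
}

/* Called when epoll reports EPOLLIN on 'sock'. */
static int handle_pollin(int epfd, int sock)
{
    uint32_t netlen, len;
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT };

    /* Read the length prefix and then the full message (blocking reads
     * for brevity; a real transport buffers partial reads). */
    if (read(sock, &netlen, sizeof(netlen)) != (ssize_t)sizeof(netlen))
        return -1;
    len = ntohl(netlen);

    char *msg = malloc(len ? len : 1);
    if (!msg || read(sock, msg, len) != (ssize_t)len) {
        free(msg);
        return -1;
    }

    /* Key idea: put the socket back into the poller *now*, before the
     * message is processed, so the next request can be picked up while
     * this one is still being handled. */
    ev.data.fd = sock;
    if (epoll_ctl(epfd, EPOLL_CTL_MOD, sock, &ev) == -1) {
        free(msg);
        return -1;
    }

    process_message(msg, len);  /* could equally be queued to a worker */
    return 0;
}

The gain comes from the earlier re-arm: without it, the socket stays out of
the poller for the full duration of request processing, so back-to-back
requests on the same connection are serialized.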

regards,
Raghavendra

On Sun, May 28, 2017 at 6:58 PM, Shyam  wrote:

> On 05/28/2017 09:24 AM, Atin Mukherjee wrote:
>
>>
>>
>> On Sun, May 28, 2017 at 1:48 PM, Niels de Vos wrote:
>>
>> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
>> > Or this one: https://review.gluster.org/15036
>> >
>> > This is backported to 3.8/10 and 3.11 and considering the size and
>> > impact of the change, I wanted to be sure that we are going to accept
>> > this across all 3 releases?
>> >
>> > @Du, would like your thoughts on this.
>> >
>> > @niels, @kaushal, @talur, as release owners, could you weigh in as
>> > well please.
>> >
>> > I am thinking that we get this into 3.11.1 if there is agreement,
>> > and not in 3.11.0 as we are finalizing the release in 3 days, and
>> > this change looks big, to get in at this time.
>>
>>
>> Given 3.11 is going to be a new release, I'd recommend getting this fix
>> in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
>> on this one.
>>
>
> It is not a fix, Atin; it is a more fundamental change to request
> processing. With 2 days to the release, you want me to merge this?
>
> Is there a *bug* that will surface without this change or is it a
> performance enhancement?
>
>
>> >
>> > Further the change is actually an enhancement, and provides
>> performance
>> > benefits, so it is valid as a change itself, but I feel it is too
>> late to
>> > add to the current 3.11 release.
>>
>> Indeed, and mostly we do not merge enhancements that are non-trivial to
>> stable branches. Each change that we backport introduces the chance of
>> regressions for users with their unknown (and possibly awkward)
>> workloads.
>>
>> The patch itself looks ok, but it is difficult to predict how the change
>> affects current deployments. I prefer to be conservative and not have
>> this merged in 3.8, at least for now. Are there any statistics on how
>> performance is affected by this change? Having features like this only
>> in newer versions might also convince users to upgrade sooner; 3.8 will
>> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
>> months from now according to our schedule.
>>
>> Niels
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org 
>> http://lists.gluster.org/mailman/listinfo/maintainers
>> 
>>
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Weekly Untriaged Bugs

2017-05-28 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1455912 / core: [Brick Multiplexing] heal info 
shows the status of the bricks as  "Transport endpoint is not connected" though 
bricks are up
https://bugzilla.redhat.com/1450567 / core: brick process cannot be started at 
the first time
https://bugzilla.redhat.com/1449416 / core: errno used incorrectly or 
misleadingly in error messages
https://bugzilla.redhat.com/1455907 / core: heal info shows the status of the 
bricks as  "Transport endpoint is not connected" though bricks are up
https://bugzilla.redhat.com/1450546 / core: Paths to some tools are hardcoded 
to /sbin or /usr/sbin
https://bugzilla.redhat.com/1454590 / core: run.c demo mode broken
https://bugzilla.redhat.com/1449232 / coreutils: race condition between 
client_ctx_get and client_ctx_set
https://bugzilla.redhat.com/1452766 / core: VM crashing but there's no apparent 
reason
https://bugzilla.redhat.com/1455049 / disperse: [GNFS+EC] Unable to release the 
lock when the other client tries to acquire the lock on the same file
https://bugzilla.redhat.com/1454701 / distribute: DHT: Pass errno as an 
argument to gf_msg
https://bugzilla.redhat.com/1450685 / doc: Document options to configure 
geo-replication for lower latency
https://bugzilla.redhat.com/1452865 / fuse: fuse mount dies.
https://bugzilla.redhat.com/1456265 / ganesha-nfs: SELinux blocks 
nfs-ganesha-lock service installed on Gluster
https://bugzilla.redhat.com/1450684 / geo-replication: Geo-replication delay 
cannot be configured to less than 3 seconds
https://bugzilla.redhat.com/1451937 / glusterd: Cannot probe nodes on ubuntu 
16.04 and not on centos 7.3
https://bugzilla.redhat.com/1455831 / glusterd: libglusterfs: updates old 
comment for 'arena_size'
https://bugzilla.redhat.com/1452961 / posix: [PATCH] incorrect xattr list 
handling on FreeBSD
https://bugzilla.redhat.com/1451184 / project-infrastructure: Assign reviewers 
based on who touched the file last
https://bugzilla.redhat.com/1449311 / read-ahead: 
[whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always 
failed on win7-32/win2012/win2k8R2 guest
https://bugzilla.redhat.com/1449313 / read-ahead: 
[whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always 
failed on win7-32/win2012/win2k8R2 guest
https://bugzilla.redhat.com/1451843 / replicate: gluster volume performance 
issue
[...truncated 2 lines...]

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Shyam

On 05/28/2017 09:24 AM, Atin Mukherjee wrote:



On Sun, May 28, 2017 at 1:48 PM, Niels de Vos wrote:

On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
> Or this one: https://review.gluster.org/15036
>
> This is backported to 3.8/10 and 3.11 and considering the size and impact of
> the change, I wanted to be sure that we are going to accept this across all
> 3 releases?
>
> @Du, would like your thoughts on this.
>
> @niels, @kaushal, @talur, as release owners, could you weigh in as well
> please.
>
> I am thinking that we get this into 3.11.1 if there is agreement, and not in
> 3.11.0 as we are finalizing the release in 3 days, and this change looks
> big, to get in at this time.


Given 3.11 is going to be a new release, I'd recommend getting this fix
in (if we have time). https://review.gluster.org/#/c/17402/ is dependent
on this one.


It is not a fix, Atin; it is a more fundamental change to request
processing. With 2 days to the release, you want me to merge this?


Is there a *bug* that will surface without this change or is it a 
performance enhancement?




>
> Further the change is actually an enhancement, and provides performance
> benefits, so it is valid as a change itself, but I feel it is too late to
> add to the current 3.11 release.

Indeed, and mostly we do not merge enhancements that are non-trivial to
stable branches. Each change that we backport introduces the chance of
regressions for users with their unknown (and possibly awkward)
workloads.

The patch itself looks ok, but it is difficult to predict how the change
affects current deployments. I prefer to be conservative and not have
this merged in 3.8, at least for now. Are there any statistics on how
performance is affected by this change? Having features like this only
in newer versions might also convince users to upgrade sooner; 3.8 will
only be supported until 3.12 (or 4.0) gets released, which is approx. 3
months from now according to our schedule.

Niels

___
maintainers mailing list
maintain...@gluster.org 
http://lists.gluster.org/mailman/listinfo/maintainers




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Shyam

On 05/28/2017 04:18 AM, Niels de Vos wrote:

On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:

Or this one: https://review.gluster.org/15036

This is backported to 3.8/10 and 3.11 and considering the size and impact of
the change, I wanted to be sure that we are going to accept this across all
3 releases?

@Du, would like your thoughts on this.

@niels, @kaushal, @talur, as release owners, could you weigh in as well
please.

I am thinking that we get this into 3.11.1 if there is agreement, and not in
3.11.0 as we are finalizing the release in 3 days, and this change looks
big, to get in at this time.

Further the change is actually an enhancement, and provides performance
benefits, so it is valid as a change itself, but I feel it is too late to
add to the current 3.11 release.


Indeed, and mostly we do not merge enhancements that are non-trivial to
stable branches. Each change that we backport introduces the chance of
regressions for users with their unknown (and possibly awkward)
workloads.

The patch itself looks ok, but it is difficult to predict how the change
affects current deployments. I prefer to be conservative and not have
this merged in 3.8, at least for now. Are there any statistics on how
performance is affected by this change? Having features like this only
in newer versions might also convince users to upgrade sooner; 3.8 will
only be supported until 3.12 (or 4.0) gets released, which is approx. 3
months from now according to our schedule.


I agree. Considering where we are with respect to 3.8 and the nature of this
change, I think we should release this along with 3.12 and not backport
the change.


In any case, I am deciding not to add this to 3.11.0, considering the change
was submitted very late and I do not want to risk destabilizing the build at
this point.




Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Atin Mukherjee
On Sun, May 28, 2017 at 1:48 PM, Niels de Vos  wrote:

> On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote:
> > Or this one: https://review.gluster.org/15036
> >
> > This is backported to 3.8/10 and 3.11 and considering the size and
> > impact of the change, I wanted to be sure that we are going to accept
> > this across all 3 releases?
> >
> > @Du, would like your thoughts on this.
> >
> > @niels, @kaushal, @talur, as release owners, could you weigh in as well
> > please.
> >
> > I am thinking that we get this into 3.11.1 if there is agreement, and
> > not in 3.11.0 as we are finalizing the release in 3 days, and this
> > change looks big, to get in at this time.
>

Given 3.11 is going to be a new release, I'd recommend getting this fix in
(if we have time). https://review.gluster.org/#/c/17402/ is dependent on
this one.

>
> > Further the change is actually an enhancement, and provides performance
> > benefits, so it is valid as a change itself, but I feel it is too late to
> > add to the current 3.11 release.
>
> Indeed, and mostly we do not merge enhancements that are non-trivial to
> stable branches. Each change that we backport introduces the chance of
> regressions for users with their unknown (and possibly awkward)
> workloads.
>
> The patch itself looks ok, but it is difficult to predict how the change
> affects current deployments. I prefer to be conservative and not have
> this merged in 3.8, at least for now. Are there any statistics on how
> performance is affected by this change? Having features like this only
> in newer versions might also convince users to upgrade sooner; 3.8 will
> only be supported until 3.12 (or 4.0) gets released, which is approx. 3
> months from now according to our schedule.
>
> Niels
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-28 Thread Niels de Vos
On Sat, May 27, 2017 at 08:48:00PM -0400, Vijay Bellur wrote:
> On Sat, May 27, 2017 at 3:02 AM, Joe Julian  wrote:
> 
> > On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
> >
> >
> >
> > On Wed, May 24, 2017 at 9:10 PM, Joe Julian  wrote:
> >
> >> Forwarded for posterity and follow-up.
> >>
> >>  Forwarded Message 
> >> Subject: Re: GlusterFS removal from Openstack Cinder
> >> Date: Fri, 05 May 2017 21:07:27 +
> >> From: Amye Scavarda  
> >> To: Eric Harney  , Joe Julian
> >>  , Vijay Bellur
> >>  
> >> CC: Amye Scavarda  
> >>
> >> Eric,
> >> I'm sorry to hear this.
> >> I'm reaching out internally (within Gluster CI team and CentOS CI which
> >> supports Gluster) to get an idea of the level of effort we'll need to
> >> provide to resolve this.
> >> It'll take me a few days to get this, but this is on my radar. In the
> >> meantime, is there somewhere I should be looking at for requirements to
> >> meet this gateway?
> >>
> >> Thanks!
> >> -- amye
> >>
> >> On Fri, May 5, 2017 at 16:09 Joe Julian  wrote:
> >>
> >>> On 05/05/2017 12:54 PM, Eric Harney wrote:
> >>> >> On 04/28/2017 12:41 PM, Joe Julian wrote:
> >>> >>> I learned, today, that GlusterFS was deprecated and removed from
> >>> >>> Cinder as one of our #gluster (freenode) users was attempting to
> >>> >>> upgrade OpenStack. I could find no rationale nor discussion of that
> >>> >>> removal. Could you please educate me about that decision?
> >>> >>>
> >>> >
> >>> > Hi Joe,
> >>> >
> >>> > I can fill in on the rationale here.
> >>> >
> >>> > Keeping a driver in the Cinder tree requires running a CI platform to
> >>> > test that driver and report results against all patchsets submitted to
> >>> > Cinder.  This is a fairly large burden, which we could not meet once
> >>> the
> >>> > Gluster Cinder driver was no longer an active development target at
> >>> Red Hat.
> >>> >
> >>> > This was communicated via a warning issued by the driver for anyone
> >>> > running the OpenStack Newton code, and via the Cinder release notes for
> >>> > the Ocata release.  (I can see in retrospect that this was probably not
> >>> > communicated widely enough.)
> >>> >
> >>> > I apologize for not reaching out to the Gluster community about this.
> >>> >
> >>> > If someone from the Gluster world is interested in bringing this driver
> >>> > back, I can help coordinate there.  But it will require someone
> >>> stepping
> >>> > in in a big way to maintain it.
> >>> >
> >>> > Thanks,
> >>> > Eric
> >>>
> >>> Ah, Red Hat's statement that the acquisition of InkTank was not an
> >>> abandonment of Gluster seems rather disingenuous now. I'm disappointed.
> >>>
> >>
> > I am a Red Hat employee working on Gluster, and I am happy with the kind of
> > investments the company has made in GlusterFS. Still am. It is a pretty good
> > company and really open. I have never had any trouble saying that something
> > management did was wrong when I strongly felt so, and they would give a
> > decent reason for their decision.
> >
> >
> > Happy to hear that. Still looks like meddling to an outsider. Not the
> > Gluster team's fault though (although more participation of the developers
> > in community meetings would probably help with that feeling of being
> > disconnected, in my own personal opinion).
> >
> >
> >
> >>
> >>> Would you please start a thread on the gluster-users and gluster-devel
> >>> mailing lists and see if there's anyone willing to take ownership of
> >>> this? I'm certainly willing to participate as well, but my $dayjob has
> >>> gone more Kubernetes than OpenStack, so I have only my limited free time
> >>> that I can donate.
> >>>
> >>
> > Do we know what maintaining the Cinder driver as active would entail? Did
> > Eric get back to any of you?
> >
> >
> > Haven't heard anything more, no.
> >
> >
>  Policies for maintaining an active driver in cinder can be found at [1]
> and [2]. We will need some work to make the driver active again (after a
> revert of the commit that removed the driver from Cinder) and to provide CI
> support as described in [2].
> 
> I will co-ordinate further internal discussions within Red Hat on this
> topic and provide an update soon on how we can proceed here.

The whole discussion is about integration of Gluster in OpenStack. I
would really appreciate seeing the discussion in public, on the
integrat...@gluster.org list. This is a low-traffic list, specifically
dedicated to these kinds of topics. Having the discussion in the archives
will surely help to track the history of decisions that were taken. We
should try hard to prevent (removal of) integration surprises in the
future.

Let me know if you need any assistance with this; I offered to start the
discussion during the last IRC meeting and am willing to follow