[Gluster-devel] Jenkins outage on Jun 26

2017-06-22 Thread Nigel Babu
Hello folks,

We'll also have a short Jenkins outage on 26 June 2017, for a Jenkins plugin
installation and upgrade.

Date: 26th June 2017
Time: 0230 UTC (2230 EDT / 0430 CEST / 0800 IST)
Duration: 1h

Jenkins will enter a quiet period 1 hour before the outage, during which no new
builds will be allowed to start.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-22 Thread Kotresh Hiremath Ravishankar
+1

On Fri, Jun 23, 2017 at 10:44 AM, Amar Tumballi  wrote:

>
>
> On Thu, Jun 22, 2017 at 9:22 PM, Atin Mukherjee 
> wrote:
>
>> I have highlighted this failure earlier at [1]
>>
>> [1] http://lists.gluster.org/pipermail/gluster-devel/2017-June/
>> 053042.html
>>
>>
> If it's stopping us from adding important features or bringing stability,
> let's document that 'Crypt' (i.e., the encryption xlator) has issues, and
> remove this test case from the regression runs.
>
> I see that even in the recent maintainers' meeting, no one volunteered to
> fix the encryption translator. So I am fine with taking it out for now.
> Does anyone have objections?
>
> -Amar
>
>
>> On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote:
>>
>>> Hi
>>>
>>> ./tests/encryption/crypt.t fails regression on
>>> https://build.gluster.org/job/centos6-regression/5112/consoleFull
>>> with a core. It doesn't seem to be related to the patch. Can somebody
>>> take a look at it? Following is the backtrace.
>>>
>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
>>> 96
>>> /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:
>>> No such file or directory.
>>> [Current thread is 1 (LWP 1082)]
>>> (gdb) bt
>>> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
>>> #1  0x7effbe9ef9d5 in offset_at_data_tail (frame=0x7effa4001960, object=0x7effb000ac28)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:110
>>> #2  0x7effbe9f0729 in rmw_partial_block (frame=0x7effa4001960, cookie=0x7effb4010050,
>>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>>>     iobref=0x7effb804a0d0, atom=0x7effbec106a0 )
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:523
>>> #3  0x7effbe9f1339 in rmw_data_tail (frame=0x7effa4001960, cookie=0x7effb4010050,
>>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>>>     iobref=0x7effb804a0d0, xdata=0x0)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:716
>>> #4  0x7effbea03684 in __crypt_readv_done (frame=0x7effb4010050, cookie=0x0,
>>>     this=0x7effb800b870, op_ret=0, op_errno=0, xdata=0x0)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3460
>>> #5  0x7effbea0375f in crypt_readv_done (frame=0x7effb4010050, this=0x7effb800b870)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3487
>>> #6  0x7effbea03b25 in put_one_call_readv (frame=0x7effb4010050, this=0x7effb800b870)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3514
>>> #7  0x7effbe9f286e in crypt_readv_cbk (frame=0x7effb4010050, cookie=0x7effb4010160,
>>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x7effbfb33880, count=1,
>>>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:371
>>> #8  0x7effbec9cb4b in dht_readv_cbk (frame=0x7effb4010160, cookie=0x7effb400ff40,
>>>     this=0x7effb800a200, op_ret=0, op_errno=2, vector=0x7effbfb33880, count=1,
>>>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-inode-read.c:479
>>> #9  0x7effbeefff83 in client3_3_readv_cbk (req=0x7effb40048b0, iov=0x7effb40048f0,
>>>     count=2, myframe=0x7effb400ff40)
>>>     at /home/jenkins/root/workspace/centos6-regression/xlators/protocol/client/src/client-rpc-fops.c:2997
>>> #10 0x7effcc7b681e in rpc_clnt_handle_reply (clnt=0x7effb803eb70, pollin=0x7effb807b350)
>>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:793
>>> #11 0x7effcc7b6de8 in rpc_clnt_notify (trans=0x7effb803ed10, mydata=0x7effb803eba0,
>>>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:986
>>> #12 0x7effcc7b2e0c in rpc_transport_notify (this=0x7effb803ed10,
>>>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-transport.c:538
>>> #13 0x7effc136458a in socket_event_poll_in (this=0x7effb803ed10, notify_handled=_gf_true)
>>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2315
>>> #14 0x7effc1364bd5 in socket_event_handler (fd=10, idx=2,

Re: [Gluster-devel] regarding backport information while porting patches

2017-06-22 Thread Amar Tumballi
On Fri, Jun 23, 2017 at 9:55 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Fri, Jun 23, 2017 at 9:37 AM, Ravishankar N 
> wrote:
>
>> On 06/23/2017 09:15 AM, Pranith Kumar Karampuri wrote:
>>
>> hi,
>>  Now that we are doing backports with the same Change-Id, we can find the
>> patches and their backports both online and in the tree without any extra
>> information in the commit message. So shall we stop adding text similar to:
>>
>> > Reviewed-on: https://review.gluster.org/17414
>>
>>
>> Sometimes I combine 2 commits from master (typically commit 2, which fixes
>> a bug in commit 1) into a single patch while backporting. The Change-Id is
>> not the same in that case, and I explicitly mention the 2 patch URLs in the
>> squashed commit sent to the release branch. So in those cases, some way to
>> trace back to the patches in master is helpful. Otherwise I think it is
>> fair to omit it.
>>
>
> Ah, that makes sense. Let us keep it for such exceptions, but as a rule it
> seems fine to omit. Let us also hear from others.
>

For an easier one-click approach, I guess one can keep the 'Reviewed-on:' line
with the URL. All the other info is just extra bytes, IMO.

-Amar


>
>
>> > Smoke: Gluster Build System 
>> > Reviewed-by: Pranith Kumar Karampuri 
>> > Tested-by: Pranith Kumar Karampuri 
>> > NetBSD-regression: NetBSD Build System 
>> > Reviewed-by: Amar Tumballi 
>> > CentOS-regression: Gluster Build System 
>> (cherry picked from commit de92c363c95d16966dbcc9d8763fd4448dd84d13)
>>
>> in the patches?
>>
>> Do you see any other value from this information that I might be missing?
>>
>> --
>> Pranith
>>
>>
>>
>>
>>
>
>
> --
> Pranith
>
>



-- 
Amar Tumballi (amarts)

Re: [Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-22 Thread Amar Tumballi
On Thu, Jun 22, 2017 at 9:22 PM, Atin Mukherjee  wrote:

> I have highlighted this failure earlier at [1]
>
> [1] http://lists.gluster.org/pipermail/gluster-devel/2017-June/053042.html
>
>
If it's stopping us from adding important features or bringing stability,
let's document that 'Crypt' (i.e., the encryption xlator) has issues, and
remove this test case from the regression runs.

I see that even in the recent maintainers' meeting, no one volunteered to
fix the encryption translator. So I am fine with taking it out for now.
Does anyone have objections?

-Amar


> On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi
>>
>> ./tests/encryption/crypt.t fails regression on
>> https://build.gluster.org/job/centos6-regression/5112/consoleFull
>> with a core. It doesn't seem to be related to the patch. Can somebody
>> take a look at it? Following is the backtrace.
>>
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
>> 96
>> /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:
>> No such file or directory.
>> [Current thread is 1 (LWP 1082)]
>> (gdb) bt
>> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
>> #1  0x7effbe9ef9d5 in offset_at_data_tail (frame=0x7effa4001960, object=0x7effb000ac28)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:110
>> #2  0x7effbe9f0729 in rmw_partial_block (frame=0x7effa4001960, cookie=0x7effb4010050,
>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>>     iobref=0x7effb804a0d0, atom=0x7effbec106a0 )
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:523
>> #3  0x7effbe9f1339 in rmw_data_tail (frame=0x7effa4001960, cookie=0x7effb4010050,
>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>>     iobref=0x7effb804a0d0, xdata=0x0)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:716
>> #4  0x7effbea03684 in __crypt_readv_done (frame=0x7effb4010050, cookie=0x0,
>>     this=0x7effb800b870, op_ret=0, op_errno=0, xdata=0x0)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3460
>> #5  0x7effbea0375f in crypt_readv_done (frame=0x7effb4010050, this=0x7effb800b870)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3487
>> #6  0x7effbea03b25 in put_one_call_readv (frame=0x7effb4010050, this=0x7effb800b870)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3514
>> #7  0x7effbe9f286e in crypt_readv_cbk (frame=0x7effb4010050, cookie=0x7effb4010160,
>>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x7effbfb33880, count=1,
>>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:371
>> #8  0x7effbec9cb4b in dht_readv_cbk (frame=0x7effb4010160, cookie=0x7effb400ff40,
>>     this=0x7effb800a200, op_ret=0, op_errno=2, vector=0x7effbfb33880, count=1,
>>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-inode-read.c:479
>> #9  0x7effbeefff83 in client3_3_readv_cbk (req=0x7effb40048b0, iov=0x7effb40048f0,
>>     count=2, myframe=0x7effb400ff40)
>>     at /home/jenkins/root/workspace/centos6-regression/xlators/protocol/client/src/client-rpc-fops.c:2997
>> #10 0x7effcc7b681e in rpc_clnt_handle_reply (clnt=0x7effb803eb70, pollin=0x7effb807b350)
>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:793
>> #11 0x7effcc7b6de8 in rpc_clnt_notify (trans=0x7effb803ed10, mydata=0x7effb803eba0,
>>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:986
>> #12 0x7effcc7b2e0c in rpc_transport_notify (this=0x7effb803ed10,
>>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-transport.c:538
>> #13 0x7effc136458a in socket_event_poll_in (this=0x7effb803ed10, notify_handled=_gf_true)
>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2315
>> #14 0x7effc1364bd5 in socket_event_handler (fd=10, idx=2, gen=1, data=0x7effb803ed10,
>>     poll_in=1, poll_out=0, poll_err=0)
>>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2467
>> #15 0x7effcca6216e in

Re: [Gluster-devel] regarding backport information while porting patches

2017-06-22 Thread Pranith Kumar Karampuri
On Fri, Jun 23, 2017 at 9:37 AM, Ravishankar N 
wrote:

> On 06/23/2017 09:15 AM, Pranith Kumar Karampuri wrote:
>
> hi,
>  Now that we are doing backports with the same Change-Id, we can find the
> patches and their backports both online and in the tree without any extra
> information in the commit message. So shall we stop adding text similar to:
>
> > Reviewed-on: https://review.gluster.org/17414
>
>
> Sometimes I combine 2 commits from master (typically commit 2, which fixes
> a bug in commit 1) into a single patch while backporting. The Change-Id is
> not the same in that case, and I explicitly mention the 2 patch URLs in the
> squashed commit sent to the release branch. So in those cases, some way to
> trace back to the patches in master is helpful. Otherwise I think it is
> fair to omit it.
>

Ah, that makes sense. Let us keep it for such exceptions, but as a rule it
seems fine to omit. Let us also hear from others.


> > Smoke: Gluster Build System 
> > Reviewed-by: Pranith Kumar Karampuri 
> > Tested-by: Pranith Kumar Karampuri 
> > NetBSD-regression: NetBSD Build System 
> > Reviewed-by: Amar Tumballi 
> > CentOS-regression: Gluster Build System 
> (cherry picked from commit de92c363c95d16966dbcc9d8763fd4448dd84d13)
>
> in the patches?
>
> Do you see any other value from this information that I might be missing?
>
> --
> Pranith
>
>
>
>
>


-- 
Pranith

Re: [Gluster-devel] regarding backport information while porting patches

2017-06-22 Thread Ravishankar N

On 06/23/2017 09:15 AM, Pranith Kumar Karampuri wrote:

hi,
 Now that we are doing backports with the same Change-Id, we can find 
the patches and their backports both online and in the tree without 
any extra information in the commit message. So shall we stop adding 
text similar to:


> Reviewed-on: https://review.gluster.org/17414


Sometimes I combine 2 commits from master (typically commit 2, which 
fixes a bug in commit 1) into a single patch while backporting. The 
Change-Id is not the same in that case, and I explicitly mention the 2 
patch URLs in the squashed commit sent to the release branch. So in 
those cases, some way to trace back to the patches in master is helpful. 
Otherwise I think it is fair to omit it.


> Smoke: Gluster Build System >
> Reviewed-by: Pranith Kumar Karampuri >
> Tested-by: Pranith Kumar Karampuri >
> NetBSD-regression: NetBSD Build System 
>
> Reviewed-by: Amar Tumballi >
> CentOS-regression: Gluster Build System 
>

(cherry picked from commit de92c363c95d16966dbcc9d8763fd4448dd84d13)

in the patches?

Do you see any other value from this information that I might be missing?

--
Pranith



[Gluster-devel] regarding backport information while porting patches

2017-06-22 Thread Pranith Kumar Karampuri
hi,
 Now that we are doing backports with the same Change-Id, we can find the
patches and their backports both online and in the tree without any extra
information in the commit message. So shall we stop adding text similar to:

> Reviewed-on: https://review.gluster.org/17414
> Smoke: Gluster Build System 
> Reviewed-by: Pranith Kumar Karampuri 
> Tested-by: Pranith Kumar Karampuri 
> NetBSD-regression: NetBSD Build System 
> Reviewed-by: Amar Tumballi 
> CentOS-regression: Gluster Build System 
(cherry picked from commit de92c363c95d16966dbcc9d8763fd4448dd84d13)

in the patches?

Do you see any other value from this information that I might be missing?
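The lookup implied here can be sketched as follows. This is a toy model with
invented commit messages and refs, just to show that the Change-Id trailer
alone identifies a patch and its backports; on a real tree the equivalent is
`git log --all --grep="Change-Id: <id>"`.

```python
# Sketch: the same Change-Id trailer links a patch to its backports,
# so the extra "Reviewed-on:" lines carry no new information.
import re

# Hypothetical refs and commit messages, for illustration only.
commits = {
    "master/a1b2c3": "afr: fix heal path\n\nChange-Id: Iabc123",
    "release-3.11/d4e5f6": "afr: fix heal path\n\nChange-Id: Iabc123",
    "master/778899": "dht: unrelated change\n\nChange-Id: Idef456",
}

CHANGE_ID = re.compile(r"^Change-Id: (\S+)$", re.MULTILINE)

def find_by_change_id(change_id):
    """Return every ref whose message carries the given Change-Id trailer."""
    return sorted(ref for ref, msg in commits.items()
                  if change_id in CHANGE_ID.findall(msg))

print(find_by_change_id("Iabc123"))
# ['master/a1b2c3', 'release-3.11/d4e5f6']
```

Both the master patch and its backport are found from the trailer alone; only
the squashed-backport case Ravishankar describes needs extra breadcrumbs.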

-- 
Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-22 Thread Pranith Kumar Karampuri
On Wed, Jun 21, 2017 at 9:12 PM, Shyam  wrote:

> On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote:
>
>>
>>
>> On Tue, Jun 20, 2017 at 7:37 PM, Shyam > > wrote:
>>
>> Hi,
>>
>> Release tagging has been postponed by a day to accommodate a fix for
>> a regression that has been introduced between 3.11.0 and 3.11.1 (see
>> [1] for details).
>>
>> As a result 3.11.1 will be tagged on the 21st June as of now
>> (further delays will be notified to the lists appropriately).
>>
>>
>> The required patches have landed upstream and are undergoing review. Could
>> we do the tagging tomorrow? We don't want to rush the patches, so that we
>> don't introduce any new bugs at this time.
>>
>
> Agreed, considering the situation we would be tagging the release tomorrow
> (June-22nd 2017).
>

Status of afr/ec patches:

EC patch on master: https://review.gluster.org/17594
EC patch on release-3.11: https://review.gluster.org/17615

The master patch is already merged, and the 3.11 patch is:
https://review.gluster.org/17602

DHT patch: https://review.gluster.org/17595. I guess this patch needs
review as well as regression results.

At the moment we are awaiting regression results for all these patches.


>
>
>>
>>
>> Thanks,
>> Shyam
>>
>> [1] Bug awaiting fix:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1463250
>> 
>>
>> "Releases are made better together"
>>
>> On 06/06/2017 09:24 AM, Shyam wrote:
>>
>> Hi,
>>
>> It's time to prepare the 3.11.1 release, which falls on the 20th
>> of
>> each month [4], and hence would be June-20th-2017 this time
>> around.
>>
>> This mail is to call out the following,
>>
>> 1) Are there any pending *blocker* bugs that need to be tracked
>> for
>> 3.11.1? If so mark them against the provided tracker [1] as
>> blockers
>> for the release, or at the very least post them as a response to
>> this
>> mail
>>
>> 2) Pending reviews in the 3.11 dashboard will be part of the
>> release,
>> *iff* they pass regressions and have the review votes, so use the
>> dashboard [2] to check on the status of your patches to 3.11 and
>> get
>> these going
>>
>> 3) Empty release notes are posted here [3], if there are any
>> specific
>> call outs for 3.11 beyond bugs, please update the review, or
>> leave a
>> comment in the review, for us to pick it up
>>
>> Thanks,
>> Shyam/Kaushal
>>
>> [1] Release bug tracker:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1
>> 
>>
>> [2] 3.11 review dashboard:
>> https://review.gluster.org/#/projects/glusterfs,dashboards/d
>> ashboard:3-11-dashboard
>> > dashboard:3-11-dashboard>
>>
>>
>> [3] Release notes WIP: https://review.gluster.org/17480
>> 
>>
>> [4] Release calendar:
>> https://www.gluster.org/community/release-schedule/
>> 
>>
>>
>>
>>
>> --
>> Pranith
>>
>


-- 
Pranith

Re: [Gluster-devel] some gluster-block fixes

2017-06-22 Thread Prasanna Kalever
On Thu, Jun 22, 2017 at 10:25 PM, Michael Adam  wrote:
> On 2017-06-22 at 16:26 +0200, Michael Adam wrote:
>> Hi all,
>>
>> I have created a few patches to gluster-block.
>> But I am a bit at a loss as to how to create
>> Gerrit review requests. Hence I have created
>> GitHub PRs.
>>
>> https://github.com/gluster/gluster-block/pull/29
>> https://github.com/gluster/gluster-block/pull/30
>>
>> Prasanna, I hope you can convert those to gerrit
>> again... :-D
>
> OK, thanks to Niels, I was able to move them to Gerrit:
>
> https://review.gluster.org/#/c/17613/
> https://review.gluster.org/#/c/17614/
>
> Updated according to the review comments on GitHub.

Thanks Michael, will take a look.

--
Prasanna

>
> Cheers - Michael
>


Re: [Gluster-devel] some gluster-block fixes

2017-06-22 Thread Michael Adam
On 2017-06-22 at 16:26 +0200, Michael Adam wrote:
> Hi all,
> 
> I have created a few patches to gluster-block.
> But I am a bit at a loss as to how to create
> Gerrit review requests. Hence I have created
> GitHub PRs.
> 
> https://github.com/gluster/gluster-block/pull/29
> https://github.com/gluster/gluster-block/pull/30
> 
> Prasanna, I hope you can convert those to gerrit
> again... :-D

OK, thanks to Niels, I was able to move them to Gerrit:

https://review.gluster.org/#/c/17613/
https://review.gluster.org/#/c/17614/

Updated according to the review comments on GitHub.

Cheers - Michael



Re: [Gluster-devel] Reducing the time to test patches which doesn't modify code

2017-06-22 Thread Nigel Babu
On Wed, Jun 21, 2017 at 10:35:05AM +0530, Amar Tumballi wrote:
> Today, any change to the glusterfs code base (other than 'doc/') triggers
> the regression runs when +1 Verified is voted. But we noticed that patches
> which only change 'extras/' or just update the README file need not run
> regressions.
>
> So, Nigel proposed the idea of a .testignore file (like .gitignore) [1],
> which lists the paths of files to ignore for testing. If all the files in
> a patch are listed in this file, the tests won't be triggered.
>
> If a patch you send in the future adds a new file, check whether you need
> to update the .testignore file too.
>
> [1] - https://review.gluster.org/17522

This feature is now in production on the regression machines. If you're only
touching files listed in .testignore, the centos-regression run will take only
about a minute. When you're adding a new script to the codebase that does not
affect gluster itself, please consider adding those files to .testignore.
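The gate described above can be sketched as follows. The file names are
hypothetical; the real job parses the repository's .testignore and the list of
files touched by the patch under test.

```python
# Minimal sketch of the .testignore gate: run the full regression only
# when at least one changed file is NOT in the ignore list.

# Stand-in for the parsed .testignore contents (invented entries).
ignored = {"README.md", ".testignore", "extras/release-notes.md"}

def needs_regression(changed_files):
    """True if any changed file falls outside the ignore list."""
    return any(f not in ignored for f in changed_files)

print(needs_regression(["README.md", ".testignore"]))
# False -> regression skipped, job finishes in about a minute
print(needs_regression(["README.md", "xlators/cluster/dht/src/dht-common.c"]))
# True -> full regression runs
```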

--
nigelb


Re: [Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-22 Thread Atin Mukherjee
I have highlighted this failure earlier at [1]

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-June/053042.html

On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi
>
> ./tests/encryption/crypt.t fails regression on
> https://build.gluster.org/job/centos6-regression/5112/consoleFull
> with a core. It doesn't seem to be related to the patch. Can somebody take
> a look at it? Following is the backtrace.
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
> 96
> /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:
> No such file or directory.
> [Current thread is 1 (LWP 1082)]
> (gdb) bt
> #0  0x7effbe9ef92b in offset_at_tail (conf=0xc0, object=0x7effb000ac28)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:96
> #1  0x7effbe9ef9d5 in offset_at_data_tail (frame=0x7effa4001960, object=0x7effb000ac28)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:110
> #2  0x7effbe9f0729 in rmw_partial_block (frame=0x7effa4001960, cookie=0x7effb4010050,
>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>     iobref=0x7effb804a0d0, atom=0x7effbec106a0 )
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:523
> #3  0x7effbe9f1339 in rmw_data_tail (frame=0x7effa4001960, cookie=0x7effb4010050,
>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x0, count=0, stbuf=0x7effb402da18,
>     iobref=0x7effb804a0d0, xdata=0x0)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/atom.c:716
> #4  0x7effbea03684 in __crypt_readv_done (frame=0x7effb4010050, cookie=0x0,
>     this=0x7effb800b870, op_ret=0, op_errno=0, xdata=0x0)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3460
> #5  0x7effbea0375f in crypt_readv_done (frame=0x7effb4010050, this=0x7effb800b870)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3487
> #6  0x7effbea03b25 in put_one_call_readv (frame=0x7effb4010050, this=0x7effb800b870)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:3514
> #7  0x7effbe9f286e in crypt_readv_cbk (frame=0x7effb4010050, cookie=0x7effb4010160,
>     this=0x7effb800b870, op_ret=0, op_errno=2, vec=0x7effbfb33880, count=1,
>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/encryption/crypt/src/crypt.c:371
> #8  0x7effbec9cb4b in dht_readv_cbk (frame=0x7effb4010160, cookie=0x7effb400ff40,
>     this=0x7effb800a200, op_ret=0, op_errno=2, vector=0x7effbfb33880, count=1,
>     stbuf=0x7effbfb33810, iobref=0x7effb804a0d0, xdata=0x0)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-inode-read.c:479
> #9  0x7effbeefff83 in client3_3_readv_cbk (req=0x7effb40048b0, iov=0x7effb40048f0,
>     count=2, myframe=0x7effb400ff40)
>     at /home/jenkins/root/workspace/centos6-regression/xlators/protocol/client/src/client-rpc-fops.c:2997
> #10 0x7effcc7b681e in rpc_clnt_handle_reply (clnt=0x7effb803eb70, pollin=0x7effb807b350)
>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:793
> #11 0x7effcc7b6de8 in rpc_clnt_notify (trans=0x7effb803ed10, mydata=0x7effb803eba0,
>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:986
> #12 0x7effcc7b2e0c in rpc_transport_notify (this=0x7effb803ed10,
>     event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7effb807b350)
>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-transport.c:538
> #13 0x7effc136458a in socket_event_poll_in (this=0x7effb803ed10, notify_handled=_gf_true)
>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2315
> #14 0x7effc1364bd5 in socket_event_handler (fd=10, idx=2, gen=1, data=0x7effb803ed10,
>     poll_in=1, poll_out=0, poll_err=0)
>     at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2467
> #15 0x7effcca6216e in event_dispatch_epoll_handler (event_pool=0x2105fc0,
>     event=0x7effbfb33e70)
>     at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/event-epoll.c:572
> #16 0x7effcca62470 in event_dispatch_epoll_worker (data=0x215d950)
>     at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/event-epoll.c:648
> #17 0x7effcbcc9aa1 in start_thread () from ./lib64/libpthread.so.0
> #18 0x7effcb631bcd in clone () from ./lib64/libc.so.6
>
>
> Thanks,
> Kotresh H R
>
> 

[Gluster-devel] some gluster-block fixes

2017-06-22 Thread Michael Adam
Hi all,

I have created a few patches to gluster-block.
But I am a bit at a loss as to how to create
Gerrit review requests. Hence I have created
GitHub PRs.

https://github.com/gluster/gluster-block/pull/29
https://github.com/gluster/gluster-block/pull/30

Prasanna, I hope you can convert those to gerrit
again... :-D

Thanks - Michael



[Gluster-devel] Coverity covscan for 2017-06-22-23764878 (master branch)

2017-06-22 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-06-22-23764878


[Gluster-devel] BUG: Code changes in EC as part of Brick Multiplexing

2017-06-22 Thread Ashish Pandey

Hi, 

There are some code changes in EC which are impacting the response time of 
'gluster v heal info'. 
I have sent the following patch to initiate a discussion on this and to 
understand why this code change was done. 
https://review.gluster.org/#/c/17606/1 

 
ec: Increase notification in all the cases 

Problem: 
"gluster v heal <volname> info" is taking a long time to respond 
when a brick is down. 

RCA: 
The heal info command does a virtual mount. 
EC waits for up to 10 seconds before sending the UP call to the upper 
xlator, in order to get a notification (DOWN or UP) from all the bricks. 

Currently, we increase ec->xl_notify_count based on the current status 
of the brick. So, if a DOWN event notification comes in and the brick is 
already down, we do not increase ec->xl_notify_count in ec_handle_down. 

Solution: 
Handle a DOWN event as a notification irrespective of the current status 
of the brick. 

 
Code change was done by https://review.gluster.org/#/c/14763/ 
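The accounting problem in the patch text above can be shown with a toy model.
This is heavily simplified (the real logic lives in ec's notify path in C, with
locking and timers); `xl_notify_count`, `ec_handle_down`, and the 10-second
wait are the only elements taken from the patch description.

```python
# Toy model: ec notifies its parent only after hearing from every brick;
# heal info's virtual mount otherwise waits out a ~10s timer.

class EcNotify:
    def __init__(self, bricks, count_down_always):
        self.bricks = bricks
        self.up = [True] * bricks        # last known status per brick
        self.xl_notify_count = 0         # bricks heard from so far
        self.count_down_always = count_down_always

    def handle_down(self, i):
        # Old behaviour: only a status *change* bumps the counter, so a
        # DOWN event for an already-down brick is silently dropped.
        if self.count_down_always or self.up[i]:
            self.xl_notify_count += 1
        self.up[i] = False

    def all_heard(self):
        return self.xl_notify_count == self.bricks

def run(fixed):
    ec = EcNotify(3, count_down_always=fixed)
    ec.up[0] = False                     # brick 0 already marked down
    for i in range(3):                   # DOWN notifications arrive
        ec.handle_down(i)
    return ec.all_heard()

print(run(fixed=False))  # False -> heal info sits out the 10s timeout
print(run(fixed=True))   # True  -> parent can be notified immediately
```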

Ashish 

[Gluster-devel] Scripts to help RCA quota accounting issues

2017-06-22 Thread Sanoj Unnikrishnan
I have written some scripts that may help RCA quota accounting related
issues in the future.
Please use them when necessary.

1) The script below compares the accounting done by 'du' with that done
by quota.
Required input: mountpoint and volname.
Output: /tmp/gluster_files.tar

cd <mountpoint>
du -h | head -n -1 | tr -d '.' | awk '{ for (i = 2; i <= NF; i++) {
printf("%s ", $i); } print "" }' > /tmp/gluster_1
cat /tmp/gluster_1 | sed 's/ $//' | sed 's/ /\\ /g' | sed 's/(/\\(/g'
| sed 's/)/\\)/g' | xargs gluster v quota <volname> list >
/tmp/gluster_2
du -h | head -n -1 | awk '{ for (i = 2; i <= NF; i++) { printf("%s
%s", $i, $1); } print "" }' | tr -d '.' > /tmp/gluster_3
cat /tmp/gluster_2 /tmp/gluster_3 | sort > /tmp/gluster_4
find . -type d > /tmp/gluster_5
tar -cvf /tmp/gluster_files.tar /tmp/gluster_*


2) To recursively get the quota xattrs on an FS tree, use:
https://gist.github.com/sanoj-unnikrishnan/740b177cbe9c3277123cf08d875a6bf8
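The comparison step that script (1) sets up can be sketched as follows. The
parsed 'du' and 'quota list' outputs are modelled as {directory: size} maps;
the directory names and sizes below are invented for illustration.

```python
# Sketch: flag directories where quota accounting disagrees with du.
# In practice the maps would be built from /tmp/gluster_2 and /tmp/gluster_3.

du_sizes    = {"/dir1": "1.0M", "/dir2": "512K", "/dir3": "2.0M"}
quota_sizes = {"/dir1": "1.0M", "/dir2": "768K", "/dir3": "2.0M"}

mismatches = sorted(d for d, size in du_sizes.items()
                    if quota_sizes.get(d) != size)
print(mismatches)  # each entry is a candidate for a quota accounting RCA
```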

Regards,
Sanoj

[Gluster-devel] Gerrit outage on Jun 26

2017-06-22 Thread Nigel Babu
Hello folks,

We'll have a Gerrit outage on Monday, 26 June, for a Gerrit upgrade. We'll be
moving from 2.12.2 to 2.12.7. It's a minor version upgrade and I don't expect
any trouble. You can test the new version on gerrit-stage.rht.gluster.org.

Date: 26 June 2017
Time: 0230 UTC (2230 EDT / 0430 CEST / 0800 IST)
Duration: 2h

If you have questions or concerns, please reach out to me. If you can spare
some time testing the staging instance, I'd be grateful too.

--
nigelb