Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-07 Thread Raghavendra Gowdappa
+rafi. Rafi, can you do an initial analysis of this? Regards, Raghavendra. - Original Message - > From: "Milind Changire" > To: gluster-devel@gluster.org > Sent: Tuesday, March 8, 2016 12:53:27 PM > Subject: [Gluster-devel] [master] FAILED: >

[Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-07 Thread Milind Changire
== Running tests in file ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
[07:27:48] ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t ..
not ok 11 Got "1" instead of "0"
not ok 14 Got "1"
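
For anyone trying to reproduce this locally, a minimal sketch of re-running just this test from a glusterfs source checkout (assuming run-tests.sh accepts individual test paths, as in the upstream tree):

    # Run only the failing test through the regression harness
    ./run-tests.sh tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

    # Or get the raw TAP output directly through prove
    prove -vf tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t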

Re: [Gluster-devel] [ANNOUNCE] Maintainer Update

2016-03-07 Thread Kotresh Hiremath Ravishankar
Congrats and all the best Aravinda! Thanks and Regards, Kotresh H R - Original Message - > From: "Venky Shankar" > To: "Gluster Devel" > Cc: maintain...@gluster.org > Sent: Tuesday, March 8, 2016 10:49:46 AM > Subject: [Gluster-devel]

[Gluster-devel] CentOS Regression generated core by .tests/basic/tier/tier-file-create.t

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All, The regression run generated a core for the patch below. https://build.gluster.org/job/rackspace-regression-2GB-triggered/18859/console From the initial analysis, it's a tiered setup where the ec sub-volume is the cold tier and afr is the hot tier. The crash has happened
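
A minimal sketch of how such a regression core is usually inspected; the binary and core paths below are placeholders and depend on the regression slave's build prefix:

    # Load the core against the daemon binary that produced it
    gdb /build/install/sbin/glusterfsd /path/to/core.12345
    (gdb) bt                        # backtrace of the crashing thread
    (gdb) thread apply all bt full  # full backtraces of every thread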

[Gluster-devel] [ANNOUNCE] Maintainer Update

2016-03-07 Thread Venky Shankar
Hey folks, As of yesterday, Aravinda has taken over the maintainership of Geo-replication. Over the past year or so, he has been actively involved in its development - introducing new features, fixing bugs, reviewing patches and helping out the community. Needless to say, he's the go-to guy

Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-07 Thread Krutika Dhananjay
It has been failing rather frequently. I have reported a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1315560 For now, I have moved it to the bad tests list here: http://review.gluster.org/#/c/13632/1 -Krutika On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay wrote: > +Pranith >

Re: [Gluster-devel] Regression: Bitrot core generated by distribute/bug-1117851.t

2016-03-07 Thread FNU Raghavendra Manjunath
Hi, I have raised a bug for it (https://bugzilla.redhat.com/show_bug.cgi?id=1315465). A patch has been sent for review (http://review.gluster.org/#/c/13628/). Regards, Raghavendra On Mon, Mar 7, 2016 at 11:04 AM, Poornima Gurusiddaiah wrote: > Hi, > > I see a bitrot

Re: [Gluster-devel] Query on healing process

2016-03-07 Thread ABHISHEK PALIWAL
On Fri, Mar 4, 2016 at 5:31 PM, Ravishankar N wrote: > On 03/04/2016 12:10 PM, ABHISHEK PALIWAL wrote: > > Hi Ravi, > > 3. On the rebooted node, do you have ssl enabled by any chance? There is a bug for "Not able to fetch volfile" when ssl is enabled: >
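
A quick way to check point 3 on the rebooted node; a sketch assuming the usual upstream setup, where management-plane SSL is keyed off the secure-access file and per-volume SSL shows up as reconfigured volume options (VOLNAME is a placeholder):

    # Management (glusterd) SSL is enabled if this file exists
    ls -l /var/lib/glusterd/secure-access

    # I/O-path SSL shows up as client.ssl / server.ssl in the volume options
    gluster volume info VOLNAME | grep -i ssl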

Re: [Gluster-devel] Default quorum for 2 way replication

2016-03-07 Thread Shyam
On 03/05/2016 05:26 AM, Pranith Kumar Karampuri wrote: That is the point. There is an illusion of choice between data integrity and HA. But we are not *really* giving HA, are we? HA will be there only if the second brick in the replica pair goes down. In your typical @Pranith, can you elaborate on
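
For reference, the client-quorum knobs being debated; illustrative commands only, with VOLNAME as a placeholder:

    # 'auto' on replica 2: writes survive only the loss of the second brick,
    # since quorum needs a majority, or exactly half including the first brick
    gluster volume set VOLNAME cluster.quorum-type auto

    # 'fixed' with count 1: either surviving brick stays writable
    # (HA over integrity, at the cost of possible split-brain)
    gluster volume set VOLNAME cluster.quorum-type fixed
    gluster volume set VOLNAME cluster.quorum-count 1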

[Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All, Here is the idea: we can use geo-replication as a backup solution using gluster volume snapshots on the slave side. One of the drawbacks of geo-replication is that it is continuous asynchronous replication and would not help in getting last week's or yesterday's data. So if we use
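
A rough sketch of the workflow being proposed; the volume and host names below are placeholders, not part of the original mail:

    # 1. Create and start a geo-replication session to the slave volume
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start

    # 2. Periodically snapshot the slave volume for point-in-time backups;
    #    pausing the session first gives a more consistent snapshot
    gluster volume geo-replication mastervol slavehost::slavevol pause
    gluster snapshot create backup_weekly slavevol
    gluster volume geo-replication mastervol slavehost::slavevol resume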