Re: [Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Atin Mukherjee
Mohit is aware of this issue and currently working on a patch. On Mon, Mar 12, 2018 at 9:47 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, > In https://build.gluster.org/job/centos7-regression/274/consoleFull, > the test in $SUBJECT generated core. It seems to be

Re: [Gluster-devel] [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Atin Mukherjee
On Thu, 8 Mar 2018 at 11:43, Kaushal M wrote: > On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi > wrote: > > Meeting date: 03/07/2018 (March 3rd, 2018. 19:30IST, 14:00UTC, 09:00EST) > > > > BJ Link > > > > Bridge: https://bluejeans.com/205933580 > >

Re: [Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-03-07 Thread Atin Mukherjee
On Wed, Mar 7, 2018 at 7:38 AM, Varsha Rao <va...@redhat.com> wrote: > Hello Atin, > > On Tue, Mar 6, 2018 at 10:23 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > > Looks like the failure is back again. Refer > > https://build.gluster.org/job/regres

Re: [Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-03-06 Thread Atin Mukherjee
Looks like the failure is back again. Refer https://build.gluster.org/job/regression-test-with-multiplex/663/console and this has been failing in other occurrences too. On Mon, Feb 26, 2018 at 2:58 PM, Varsha Rao <va...@redhat.com> wrote: > Hi Atin, > > On Mon, Feb 26, 2018 at

Re: [Gluster-devel] two potential memory leak place found on glusterfs 3.12.3

2018-02-26 Thread Atin Mukherjee
+Gaurav On Mon, Feb 26, 2018 at 2:02 PM, Raghavendra Gowdappa wrote: > +glusterd devs > > On Mon, Feb 26, 2018 at 1:41 PM, Storage, Dev (Nokia - Global) < > dev.stor...@nokia.com> wrote: > >> Hi glusterfs experts, >> >>Good day! >> >>During our recent test

[Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-02-25 Thread Atin Mukherjee
Hi Varsha, Thanks for your first feature "namespace" in GlusterFS! As we run periodic regression jobs with brick multiplexing, we have seen that tests/basic/namespace.t fails constantly with brick multiplexing enabled. I just went through the function check_samples () in the test file and it

Re: [Gluster-devel] gNFS service management from glusterd

2018-02-21 Thread Atin Mukherjee
On Wed, Feb 21, 2018 at 4:24 PM, Xavi Hernandez wrote: > Hi all, > > currently glusterd sends a SIGKILL to stop gNFS, while all other services > are stopped with a SIGTERM signal first (this can be seen in > glusterd_svc_stop() function of mgmt/glusterd xlator). > > The
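The thread above asks whether glusterd could stop gNFS the way glusterd_svc_stop() stops other services: SIGTERM first, SIGKILL only as a fallback. A minimal stand-alone sketch of that pattern in plain shell (the function name and grace period are my own, not taken from glusterd):

```shell
# Ask a process to exit cleanly with SIGTERM; escalate to SIGKILL only
# if it is still alive after a grace period (default 5 seconds).
stop_svc() {
    pid=$1
    grace=${2:-5}
    kill -TERM "$pid" 2>/dev/null || return 0    # already gone
    i=0
    while [ "$i" -lt "$grace" ]; do
        kill -0 "$pid" 2>/dev/null || return 0   # exited cleanly
        sleep 1
        i=$((i + 1))
    done
    kill -KILL "$pid" 2>/dev/null                # escalate as a last resort
}
```

Today gNFS takes the SIGKILL branch directly; the proposal in the thread is to let it take the SIGTERM path first, like every other glusterd-managed service.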

[Gluster-devel] tests/bugs/rpc/bug-921072.t - fails almost all the times in mainline

2018-02-20 Thread Atin Mukherjee
*https://build.gluster.org/job/centos7-regression/15/consoleFull 20:24:36* [20:24:39] Running tests in file ./tests/bugs/rpc/bug-921072.t*20:27:56* ./tests/bugs/rpc/bug-921072.t timed out after 200 seconds*20:27:56*
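For context on how the 200-second abort above typically works: the harness runs each test under a deadline, much like coreutils timeout(1). The 2-second numbers below are only a toy illustration of that behavior, not the real harness configuration:

```shell
# Run a command under a deadline; timeout(1) kills the command and
# exits with status 124 when the deadline passes before it finishes.
timeout 2 sh -c 'sleep 30' || echo "timed out, exit status: $?"
```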

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.10.11: Planned for the 28th of Feb, 2018

2018-02-19 Thread Atin Mukherjee
On Tue, Feb 20, 2018 at 7:26 AM, Atin Mukherjee <amukh...@redhat.com> wrote: > > > On Mon, Feb 19, 2018 at 7:46 PM, Shyam Ranganathan <srang...@redhat.com> > wrote: > >> On 01/30/2018 02:14 PM, Shyam Ranganathan wrote: >> > Hi, >> > >>

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.10.11: Planned for the 28th of Feb, 2018

2018-02-19 Thread Atin Mukherjee
On Mon, Feb 19, 2018 at 7:46 PM, Shyam Ranganathan wrote: > On 01/30/2018 02:14 PM, Shyam Ranganathan wrote: > > Hi, > > > > As release 3.10.10 is tagged and off to packaging, here are the needed > > details for 3.10.11 > > > > Release date: 28th Feb, 2018 > > Checking the

Re: [Gluster-devel] Jenkins Issues this weekend and how we're solving them

2018-02-18 Thread Atin Mukherjee
On Mon, Feb 19, 2018 at 8:53 AM, Nigel Babu wrote: > Hello, > > As you all most likely know, we store the tarball of the binaries and core > if there's a core during regression. Occasionally, we've introduced a bug > in Gluster and this tar can take up a lot of space. This has

Re: [Gluster-devel] IMP: upgrade issue

2018-02-12 Thread Atin Mukherjee
On Tue, Feb 13, 2018 at 11:12 AM, Shyam Ranganathan <srang...@redhat.com> wrote: > On 02/13/2018 12:35 AM, Atin Mukherjee wrote: > > > > > > On Tue, Feb 13, 2018 at 10:43 AM, Jiffin Tony Thottan > > <jthot...@redhat.com <mailto:jthot...@redhat.co

Re: [Gluster-devel] IMP: upgrade issue

2018-02-12 Thread Atin Mukherjee
If anyone upgrades the cluster from < 3.10 to >= 3.10, it's a genuine problem as per my code reading. > -- > > Jiffin > > > > On Tuesday 13 February 2018 08:21 AM, Hari Gowtham wrote: > >> I'm working on it. >> >> On Tue, Feb 13, 2018 at 8:11 AM
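For readers wondering why 3.10 is a hard boundary here: glusterd gates features on an integer op-version rather than on the version string. As a rough sketch, assuming an X*10000 + Y*100 style mapping (my assumption for illustration, not quoted from the thread):

```shell
# Map a "major.minor" release string to an integer op-version so that
# upgrade checks become plain integer comparisons.
to_opversion() {
    major=${1%%.*}
    minor=${1#*.}
    echo $(( major * 10000 + minor * 100 ))
}

to_opversion 3.7    # prints 30700
to_opversion 3.10   # prints 31000
```

Under such a mapping, a cluster crossing the 3.10 boundary changes its op-version comparison results, which is the class of check the backport above touches.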

[Gluster-devel] IMP: upgrade issue

2018-02-12 Thread Atin Mukherjee
FYI.. We need to backport https://review.gluster.org/#/c/19552 (yet to be merged in mainline) in all the active release branches to prevent users from running into upgrade failures. The bug and the commit have further details. ___ Gluster-devel mailing list

Re: [Gluster-devel] Reducing regression time of glusterd test cases

2018-02-10 Thread Atin Mukherjee
This is completed and the changes are in mainline. We have managed to bring down the regression time for glusterd tests by more than half. A big shout-out to Sanju and Gaurav for taking this to completion. On Thu, 4 Jan 2018 at 09:57, Sanju Rakonde wrote: > Hi all, > > You can

[Gluster-devel] regression test failure in mainline

2018-02-09 Thread Atin Mukherjee
FYI.. One of my patches in mainline has broken a regression test, tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t, and the fix for the same has been posted at https://review.gluster.org/#/c/19536 . I would request someone to review this change and merge

Re: [Gluster-devel] sometimes when reboot node glustershd process does not come up.

2018-02-07 Thread Atin Mukherjee
Apologies for coming back late on this, as I was occupied with some other priority work. I think your analysis is spot on. To add on why we hit this race: when glusterd restarts, glusterd_svcs_manager () is invoked from glusterd_compare_friend_data () and through glusterd_restart_bricks () where

Re: [Gluster-devel] https://review.gluster.org/#/c/18893

2018-02-05 Thread Atin Mukherjee
Backported, review request : https://review.gluster.org/#/c/19501 On Mon, Feb 5, 2018 at 11:59 AM, Nithya Balachandran wrote: > Hi, > > It looks like this has not been ported to release-3.12 branch. Can the > author please do so? > > Thanks, > Nithya > >

Re: [Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Atin Mukherjee
I don't think that's the right way. Ideally the test shouldn't be attempting to stop a volume if a rebalance session is in progress. If we do see such a situation even when we check the rebalance status, wait up to 30 secs for it to finish, and volume stop still fails with a rebalance session in

Re: [Gluster-devel] [Gluster-users] Status of the patch!!!

2018-01-31 Thread Atin Mukherjee
I have repeatedly explained that the way to hit this problem is *extremely rare*, until and unless you prove us wrong and explain why you think you can get into this situation often. I still see that information is not being made available to us to think through why this fix

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Branched

2018-01-26 Thread Atin Mukherjee
On Fri, Jan 26, 2018 at 5:11 PM, Raghavendra G wrote: > > > On Fri, Jan 26, 2018 at 4:49 PM, Raghavendra Gowdappa > wrote: > >> >> >> - Original Message - >> > From: "Shyam Ranganathan" >> > To: "Gluster Devel"

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Branched

2018-01-25 Thread Atin Mukherjee
Shyam, We need a 4.0 version created in Bugzilla for GlusterFS, which is currently missing. I have a patch to backport into this branch. On Wed, Jan 24, 2018 at 1:47 AM, Shyam Ranganathan wrote: > 4.0 release has been branched! > > I will follow this up with a more

Re: [Gluster-devel] Infra-related Regression Failures and What We're Doing

2018-01-23 Thread Atin Mukherjee
Both the tests are now marked as bad since there has been more than one instance where these tests have failed even after fixing the infra problem. I request the geo-rep team to take a look and revive the tests soon. On Tue, Jan 23, 2018 at 2:30 PM, Atin Mukherjee <amukh...@redhat.com>

Re: [Gluster-devel] Infra-related Regression Failures and What We're Doing

2018-01-23 Thread Atin Mukherjee
On Mon, Jan 22, 2018 at 5:13 PM, Nigel Babu wrote: > Update: All the nodes that had problems with geo-rep are now fixed. > Waiting on the patch to be merged before we switch over to Centos 7. If > things go well, we'll replace nodes one by one as soon as we have one green > on

Re: [Gluster-devel] GD 2 xlator option changes

2018-01-15 Thread Atin Mukherjee
On Mon, 15 Jan 2018 at 12:15, Nithya Balachandran wrote: > Hi, > > A few questions about this: > > 1. What (if anything) should be done for options like these which have "!" > ? > > /* Switch xlator options (Distribute special case) */ > > { .key=

Re: [Gluster-devel] Recent regression failures

2018-01-12 Thread Atin Mukherjee
On Thu, Jan 11, 2018 at 10:15 AM, Nigel Babu wrote: > Hello folks, > > We may have been a little too quick to blame Meltdown on the Jenkins > failures yesterday. In any case, we've opened a ticket with our provider and > they're looking into the failures. I've looked at the last

Re: [Gluster-devel] Revert of 56e5fdae (SSL change) - why?

2018-01-07 Thread Atin Mukherjee
t;j...@julianfamily.org> wrote: > The point is, I believe, that one shouldn't have to go digging through > external resources to find out why a commit exists. Please ensure the > commit message has adequate accurate information. > > On 01/07/2018 07:11 PM, Atin Mukherjee wrote: >

Re: [Gluster-devel] Revert of 56e5fdae (SSL change) - why?

2018-01-07 Thread Atin Mukherjee
Also please refer http://lists.gluster.org/pipermail/gluster-devel/2017-December/054103.html . Some of the tests like ssl-cipher.t, trash.t were failing frequently in brick multiplexing enabled regression jobs. When I reverted this patch, I couldn't reproduce any of those test failures. On Mon,

[Gluster-devel] upstream master is broken

2018-01-06 Thread Atin Mukherjee
Commit 515a832 changed the signature of the VALIDATE_DATA_AND_LOG macro, whereas a different patch (commit 9243059) introducing a new function which uses this macro was sent earlier. So 515a832 got merged followed by 9243059 (without a rebase). https://review.gluster.org/#/c/19155/ fixes it. We need

Re: [Gluster-devel] [Gluster-users] 2018 - Plans and Expectations on Gluster Community

2018-01-02 Thread Atin Mukherjee
On Tue, Jan 2, 2018 at 2:36 PM, Hetz Ben Hamo wrote: > Hi Amar, > > If can say something about the development of GlusterFS - is that there > are 2 missing things: > > 1. Breakage between releases. I'm "stuck" using GlusterFS 3.8 because > someone support to enable NFS-Ganesha.

Re: [Gluster-devel] trash.t failure with brick multiplexing [Was Re: Build failed in Jenkins: regression-test-with-multiplex #574]

2018-01-01 Thread Atin Mukherjee
On Thu, Dec 21, 2017 at 7:27 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > > > On Wed, Dec 20, 2017 at 11:58 AM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> ./tests/bugs/glusterd/bug-1230121-replica_subvol_count_correct_cal.t >> > > Unfortu

[Gluster-devel] trash.t failure with brick multiplexing [Was Re: Build failed in Jenkins: regression-test-with-multiplex #574]

2017-12-21 Thread Atin Mukherjee
On Wed, Dec 20, 2017 at 11:58 AM, Atin Mukherjee <amukh...@redhat.com> wrote: > ./tests/bugs/glusterd/bug-1230121-replica_subvol_count_correct_cal.t > Unfortunately the above is passing in my setup. I'll be checking the logs to see if I can figure out the issue. ./tests/feat

[Gluster-devel] Fwd: Build failed in Jenkins: regression-test-with-multiplex #574

2017-12-19 Thread Atin Mukherjee
./tests/bugs/glusterd/bug-1230121-replica_subvol_count_correct_cal.t ./tests/features/trash.t The above two are new failures since day before yesterday. Job link is at https://build.gluster.org/job/regression-test-with-multiplex/574/consoleFull . -- Forwarded message -- From:

[Gluster-devel] REMINDER: Updating xlator options for GD2 (Was Re: Call for help: Updating xlator options for GD2 (round one))

2017-12-19 Thread Atin Mukherjee
Kaushal had written an xlator-checker tool in GD2 to determine which xlators are still missing the alignment with the new xlator_options structure, which is a strict dependency of GD2. If GD2 has to work with glusterfs master, having all the options compatible with GD2 is mandatory work

Re: [Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-15 Thread Atin Mukherjee
s bd >> xlator, which adds some more options that make the help output to grow >> beyond the buffer size. >> >> I'll send a patch to fix the problem. >> >> Xavi >> >> On Fri, Dec 15, 2017 at 10:05 AM, Xavi Hernandez <jaher...@redhat.com> >> wrote

Re: [Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-15 Thread Atin Mukherjee
But why doesn't it crash every time if this is the RCA? None of us could actually reproduce it locally. On Fri, Dec 15, 2017 at 2:23 PM, Xavi Hernandez wrote: > I've seen this failure in one of my local tests and I've done a quick > analysis: > > (gdb) bt > #0

Re: [Gluster-devel] [Gluster-Maintainers] Maintainers meeting Agenda: Dec 13th

2017-12-12 Thread Atin Mukherjee
On Tue, Dec 12, 2017 at 5:15 PM, Amar Tumballi wrote: > This is going to be a longer meeting if we want to discuss everything > here, so please consider going through this before and add your points > (with name) in the meeting notes. See you all tomorrow. > > Meeting date:

Re: [Gluster-devel] Crash in glusterd!!!

2017-12-06 Thread Atin Mukherjee
> pollin = 0x3fff6c000920 >> >> priv = 0x3fff74002d50 >> >> #14 0x3fff847ff89c in socket_event_handler (fd=, >> idx=, data=0x3fff74002210, poll_in=, >> poll_out=, poll_err=) at socket.c:2349 >> >> ---Type to continue, or q to quit--- >>

Re: [Gluster-devel] Crash in glusterd!!!

2017-12-06 Thread Atin Mukherjee
Without the glusterd log file and the core file or the backtrace, I can't comment on anything. On Wed, Dec 6, 2017 at 3:09 PM, ABHISHEK PALIWAL wrote: > Any suggestion > > On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" wrote: > >> Hi Team, >> >> We

Re: [Gluster-devel] regression tests taking time

2017-11-29 Thread Atin Mukherjee
The timeout of the entire job used to be 5 hours and now it has been increased to 6 hours (IIRC). There's no specific per-test timeout. On Wed, 29 Nov 2017 at 17:05, Raghavendra G wrote: > Isn't there a timeout after which test is aborted? AFAIR its 300 seconds. > > On

Re: [Gluster-devel] Tests failing on Centos 7

2017-11-27 Thread Atin Mukherjee
On Tue, Nov 28, 2017 at 11:35 AM, Ravishankar N wrote: > > > On 11/27/2017 07:12 PM, Nigel Babu wrote: > > Hello folks, > > I have an update on chunking. There's good news and bad. The first bit is > that we have a chunked regression job now. It splits it out into 10 chunks

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: Release notes (Please read and contribute)

2017-11-27 Thread Atin Mukherjee
On Tue, Nov 21, 2017 at 1:41 AM, Shyam Ranganathan wrote: > Hi, > > 3.13 RC0 is around the corner (possibly tomorrow). Towards this and the > final 3.13.0 release, I was compiling the features that are a part of 3.13 > and also attempted to write out the release notes for

[Gluster-devel] Brick multiplexing test failures

2017-11-17 Thread Atin Mukherjee
./tests/basic/tier/new-tier-cmds.t ./tests/bugs/core/multiplex-limit-issue-151.t ./tests/bugs/core/bug-1432542-mpx-restart-crash.t The above tests have started failing frequently over the last few weeks in mainline. I believe some recent change(s) have contributed to the failures. While I am aware

Re: [Gluster-devel] Slow volume, gluster volume status bug

2017-11-14 Thread Atin Mukherjee
On Tue, Nov 14, 2017 at 2:47 PM, Emmanuel Dreyfus <m...@netbsd.org> wrote: > On Tue, Nov 14, 2017 at 12:17:05PM +0530, Atin Mukherjee wrote: > > > gluster volume status also exhibits trouble: each server will only > > > list its bricks, but not the other's o

Re: [Gluster-devel] Slow volume, gluster volume status bug

2017-11-13 Thread Atin Mukherjee
On Mon, Nov 13, 2017 at 10:10 PM, Emmanuel Dreyfus wrote: > Hello > > I am looking for hints about how to debug this: > > I have a 4x2 Distributed-Replicate volume which exhibits extremely slow > operations. Example: > # time stat /gfs/dl > 51969 10143657874486987692 drwxr-xr-x

[Gluster-devel] Fwd: Build failed in Jenkins: regression-test-with-multiplex #520

2017-11-12 Thread Atin Mukherjee
./tests/basic/tier/new-tier-cmds.t is failing very frequently with brick-mux enabled. Can someone from the tiering team take a look at it please? -- Forwarded message - From: Date: Sun, 12 Nov 2017 at 22:38 Subject: Build failed in Jenkins:

Re: [Gluster-devel] Release 3.12.3 : Scheduled for the 10th of November

2017-11-10 Thread Atin Mukherjee
On Fri, Nov 10, 2017 at 6:21 PM, Niels de Vos wrote: > On Fri, Nov 10, 2017 at 11:23:51AM +0530, Jiffin Tony Thottan wrote: > > Hi, > > > > I am planning to do 3.12.3 release today 10:00 pm IST (4:30 pm GMT). > > > > Following bugs is removed from tracker list > > > > Bug

Re: [Gluster-devel] master broken.

2017-11-07 Thread Atin Mukherjee
Please pull in the latest changes. It’s fixed now. On Tue, 7 Nov 2017 at 23:41, Hari Gowtham wrote: > Hi, > > I have been trying to install the rebase and i see this error > > > server.c: In function ‘init’: > server.c:1205:15: error: too few arguments to function >

Re: [Gluster-devel] Regression failure: ./tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t

2017-11-06 Thread Atin Mukherjee
On Mon, 6 Nov 2017 at 18:26, Nithya Balachandran <nbala...@redhat.com> wrote: > On 6 November 2017 at 18:02, Atin Mukherjee <amukh...@redhat.com> wrote: > >> Snippet from where the test failed (the one which failed is marked in >> bold): >> >> # Start

Re: [Gluster-devel] Regression failure: ./tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t

2017-11-06 Thread Atin Mukherjee
Snippet from where the test failed (the one which failed is marked in bold): # Start the volume TEST $CLI_1 volume start $V0 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status_1 $V0 $H1 $B1/${V0}1 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status_1 $V0 $H2 $B2/${V0}2 EXPECT_WITHIN
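The EXPECT_WITHIN lines quoted above poll a probe command until it returns the expected value or a timeout expires. A minimal stand-alone re-implementation of that idiom, for readers unfamiliar with the test framework (this is a sketch, not the actual framework code):

```shell
# Re-run a probe command once a second until it prints the expected
# value; fail if the deadline passes first.
# usage: expect_within <timeout-secs> <expected-output> <cmd> [args...]
expect_within() {
    deadline=$(( $(date +%s) + $1 ))
    expected=$2
    shift 2
    while [ "$(date +%s)" -le "$deadline" ]; do
        [ "$("$@")" = "$expected" ] && return 0
        sleep 1
    done
    return 1
}

# The quoted check would then read roughly:
# expect_within "$PROCESS_UP_TIMEOUT" 1 brick_up_status_1 "$V0" "$H1" "$B1/${V0}1"
```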

Re: [Gluster-devel] Coverity fixes

2017-11-03 Thread Atin Mukherjee
On Fri, 3 Nov 2017 at 18:31, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote: > On 11/02/2017 10:19 AM, Atin Mukherjee wrote: > > While I appreciate the folks to contribute lot of coverity fixes over > > last few days, I have an observation for some of the patches the &g

Re: [Gluster-devel] String manipulation

2017-11-02 Thread Atin Mukherjee
I missed clicking "reply all" earlier :) On Thu, Nov 2, 2017 at 9:34 PM, Xavi Hernandez <jaher...@redhat.com> wrote: > Hi Atin, > > On 2 November 2017 at 16:31, Atin Mukherjee <atin.mukherje...@gmail.com> > wrote: > >> >> >> On Thu, Nov

[Gluster-devel] Coverity fixes

2017-11-02 Thread Atin Mukherjee
While I appreciate the folks contributing a lot of Coverity fixes over the last few days, I have an observation: for some of the patches the Coverity issue id(s) are *not* mentioned, which puts maintainers in a difficult situation when trying to understand the exact complaint coming out of Coverity. From my past

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: (STM release) Details

2017-10-31 Thread Atin Mukherjee
On Tue, 31 Oct 2017 at 18:34, Shyam Ranganathan wrote: > On 10/31/2017 08:11 AM, Karthik Subrahmanya wrote: > > Hey Shyam, > > > > Can we also have the heal info summary feature [1], which is merged > > upstream [2]. > > I haven't raised an issue for this yet, I can do that

Re: [Gluster-devel] [Gluster-Maintainers] Client Server incompatibility with Gluster 4.0

2017-10-13 Thread Atin Mukherjee
On Thu, 12 Oct 2017 at 20:45, Shyam Ranganathan wrote: > On 10/12/2017 11:03 AM, Vijay Bellur wrote: > > Further, rolling upgrades of the server becomes a moot point, as > > some would be a higher version in the interim and hence prevent > > existing clients to

[Gluster-devel] r.g.o - invalid certificate

2017-10-08 Thread Atin Mukherjee
r.g.o is inaccessible to me with the following error: review.gluster.org uses an invalid security certificate. The certificate expired on October 8, 2017, 5:17 PM. The current time is October 8, 2017, 7:38 PM. Error code: SEC_ERROR_EXPIRED_CERTIFICATE Bug filed at
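The SEC_ERROR_EXPIRED_CERTIFICATE condition can be diagnosed with the stock openssl CLI alone; nothing gluster-specific is needed. The sketch below generates a short-lived self-signed certificate and inspects its validity window (the paths and subject name are made up for illustration):

```shell
dir=$(mktemp -d)

# Generate a throwaway self-signed certificate valid for one day.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=example.test' -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null

# Print the expiry timestamp, the same field the browser is complaining about.
openssl x509 -noout -enddate -in "$dir/cert.pem"

# Exit status 0 means the certificate is still valid right now.
openssl x509 -noout -checkend 0 -in "$dir/cert.pem"
```

Against the live server, the equivalent check would be `echo | openssl s_client -connect review.gluster.org:443 2>/dev/null | openssl x509 -noout -enddate`.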

Re: [Gluster-devel] Gluster builder being hit by too much process

2017-10-06 Thread Atin Mukherjee
On Fri, 6 Oct 2017 at 19:05, Michael Scherer wrote: > Le vendredi 06 octobre 2017 à 16:53 +0530, Gaurav Yadav a écrit : > > As gluster cli was failing to create a volume which has tons of brick > > request in one command. > > I added this

[Gluster-devel] brick multiplexing regression is broken

2017-10-05 Thread Atin Mukherjee
The following commit has broken the brick multiplexing regression job. tests/bugs/bug-1371806_1.t has failed a couple of times. One of the latest regression job reports is at https://build.gluster.org/job/regression-test-with-multiplex/406/console . commit 9b4de61a136b8e5ba7bf0e48690cdb1292d0dee8

Re: [Gluster-devel] [Gluster-users] [Gluster-infra] lists.gluster.org issues this weekend

2017-09-22 Thread Atin Mukherjee
On Fri, 22 Sep 2017 at 18:54, Ravishankar N wrote: > Hello, > Are our servers still facing the overload issue? My replies to > gluster-users ML are not getting delivered to the list. > Same here. Even this is true for gluster-devel as well. > Regards, > Ravi > > > On

Re: [Gluster-devel] netbsd-periodic failing on master

2017-09-11 Thread Atin Mukherjee
On Mon, Sep 11, 2017 at 4:42 PM, Hari Gowtham wrote: > Hi, > > I took a look at the issue. The new command "gluster volume status > client-list" > Has an issue with brick multiplexing. > > The way this command aggregates the values and the way the brick > multiplexing

[Gluster-devel] tests/basic/pump.t - what is it used for?

2017-09-07 Thread Atin Mukherjee
Pranith, I see you're the author of the test in $Subj. While I was working on a patch, https://review.gluster.org/#/c/18226/ , to disallow replace-brick operations on distribute-only volumes, the patch failed the regression on this test, as the test actually uses replace-brick on a distribute-only

Re: [Gluster-devel] [Gluster-Maintainers] Changing Submit Type on review.gluster.org

2017-09-07 Thread Atin Mukherjee
On Thu, Sep 7, 2017 at 11:50 AM, Nigel Babu wrote: > Hello folks, > > A few times, we've merged dependent patches out of order because the Submit > type[1] did not block us from doing so. The last few times we've talked > about > this, we didn't actually take a strong decision

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-30 Thread Atin Mukherjee
If that makes sense I will make those changes > as well along with introducing the delay b/w 'start' and 'status' > > On Wed, Aug 30, 2017 at 4:26 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> >> >> On Wed, Aug 30, 2017 at 4:23 PM, Shwetha Pandura

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-30 Thread Atin Mukherjee
. We can't assume rebalance status to report back success immediately after rebalance start and I've explained the why part in the earlier thread. Why do we need to do an intermediate check of rebalance status before going for wait_for_rebalance_to_complete ? > On Wed, Aug 30, 2017 at 4:07 PM, Atin

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-30 Thread Atin Mukherjee
The status failure was logged at 15:13:58,994 whereas the rebalance start was triggered at 15:13:58,952. @Shwetha - could you help me in understanding how we log the rebalance status ret code in the glusto log? On Wed, Aug 30, 2017 at 4:07 PM, Atin Mukherjee <amukh...@redhat.com> wrote: >

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-29 Thread Atin Mukherjee
t rebalance status was executed multiple times till it succeeded? If yes then the test shouldn't have failed. Can I get access to the complete set of logs? > -Shwetha > > On Tue, Aug 29, 2017 at 7:04 PM, Shyam Ranganathan <srang...@redhat.com> > wrote: > >> On 08/29/2017 09:31 AM

Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-29 Thread Atin Mukherjee
On Tue, Aug 29, 2017 at 4:13 AM, Shyam Ranganathan wrote: > Nigel, Shwetha, > > The latest Glusto run [a] that was started by Nigel, post fixing the prior > timeout issue, failed (much later though) again. > > I took a look at the logs and my analysis is here [b] > > @atin,

Re: [Gluster-devel] [Gluster-users] Gluster 4.0: Update

2017-08-24 Thread Atin Mukherjee
On Fri, 25 Aug 2017 at 06:11, Lindsay Mathieson wrote: > I did a quick google to see what Halo Replication was - nice feature, > very useful. > > Unfortunately I also found this: > https://www.google.com/patents/US20160028806 > > >Halo based file system replication

Re: [Gluster-devel] [Gluster-users] Brick count limit in a volume

2017-08-22 Thread Atin Mukherjee
An upstream bug would be ideal, as github issues are mainly used for enhancements. In the meantime, could you point to the exact failure shown at the command line and the log entry from cli.log? On Wed, Aug 23, 2017 at 12:10 AM, Serkan Çoban wrote: > Hi, I think this is the

Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-02 Thread Atin Mukherjee
On Wed, 2 Aug 2017 at 18:41, Kaushal M wrote: > On Wed, Aug 2, 2017 at 5:03 PM, Prashanth Pai wrote: > > Hi all, > > > > The ongoing work on glusterd2 necessitates following non-breaking and > > non-exhaustive list of changes to glusterfs source code: > > >

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.12: Status of features (Require responses!)

2017-08-01 Thread Atin Mukherjee
On Mon, Jul 31, 2017 at 11:53 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > As part of the get-state enhancement efforts, primarily the requirements coming > from the tendrl project, Samikshan is working on a patch to get the geo-rep > session details included in it. This is the o

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.12: Status of features (Require responses!)

2017-07-31 Thread Atin Mukherjee
As part of the get-state enhancement efforts, primarily the requirements coming from the tendrl project, Samikshan is working on a patch to get the geo-rep session details included in it. This is the only patch which is pending atm. @Samikshan - can we please put the patch up for review by tomorrow so that

[Gluster-devel] tests/bugs/quota/bug-1292020.t getting hung

2017-07-26 Thread Atin Mukherjee
This is the 3rd instance where I have seen this test hang, resulting in the regression getting aborted. The latest victim is https://build.gluster.org/job/centos6-regression/5696/console . ~Atin

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #98

2017-07-20 Thread Atin Mukherjee
NetBSD runs don't go through. add-brick-self-heal.t seems to be generating a core. -- Forwarded message - From: Date: Fri, 21 Jul 2017 at 06:20 Subject: [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #98 To: ,

Re: [Gluster-devel] [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #162

2017-07-17 Thread Atin Mukherjee
ck instances go down with the kill signal. We need to use the kill_brick utility here to ensure the test is compatible with brick multiplexing. On Sun, Jul 16, 2017 at 1:22 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > EC dev - tests/basic/ec/ec-1468261.t is failing frequently now with brick

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #162

2017-07-16 Thread Atin Mukherjee
EC dev - tests/basic/ec/ec-1468261.t is failing frequently now with brick multiplex enabled. Please have a look. -- Forwarded message - From: Date: Sun, 16 Jul 2017 at 06:32 Subject: [Gluster-Maintainers] Build failed in Jenkins:

Re: [Gluster-devel] upstream regression suite is broken

2017-07-07 Thread Atin Mukherjee
bly something in the code which has gone in to make this test fail now. So there's no way it's something the developer/maintainer has to take the blame for. > > > [1] - https://review.gluster.org/12209 > [2] - https://build.gluster.org/job/centos6-regression/4897/consoleFull > > -Krutika &

[Gluster-devel] upstream regression suite is broken

2017-07-06 Thread Atin Mukherjee
Krutika, tests/basic/stats-dump.t is failing all the time, and as per my initial analysis the failures are seen after https://review.gluster.org/#/c/17709/ got into mainline; reverting this patch makes the test run successfully. I do understand that the centos vote for this patch was

[Gluster-devel] is tests/basic/gfapi/libgfapi-fini-hang.t broken in NetBSD ?

2017-07-06 Thread Atin Mukherjee
https://build.gluster.org/job/netbsd7-regression/4761/consoleFull

Re: [Gluster-devel] reagarding backport information while porting patches

2017-06-23 Thread Atin Mukherjee
On Fri, Jun 23, 2017 at 9:37 AM, Ravishankar N wrote: > On 06/23/2017 09:15 AM, Pranith Kumar Karampuri wrote: > > hi, > Now that we are doing backports with same Change-Id, we can find the > patches and their backports both online and in the tree without any extra >
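The point under discussion, that preserving the same Change-Id across branches lets you locate a fix and all of its backports with one search, can be demonstrated with a throwaway repository (the Change-Id value below is made up):

```shell
# Build a toy repo with the same Change-Id on a mainline commit and a
# backport commit, then find both with a single history search.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.test
git config user.name dev

git commit -q --allow-empty -m "core: fix a crash

Change-Id: I00000000feedbeef"

git checkout -q -b release-3.12
git commit -q --allow-empty -m "core: fix a crash (backport of mainline fix)

Change-Id: I00000000feedbeef"

# One grep over all refs lists both the original and the backport.
git log --all --oneline --grep='Change-Id: I00000000feedbeef'
```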

Re: [Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-22 Thread Atin Mukherjee
I have highlighted this failure earlier at [1]. [1] http://lists.gluster.org/pipermail/gluster-devel/2017-June/053042.html On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi > > ./tests/encryption/crypt.t fails regression on >

Re: [Gluster-devel] Regression Voting Changes

2017-06-21 Thread Atin Mukherjee
On Tue, Jun 20, 2017 at 9:17 AM, Nigel Babu wrote: > Hello folks, > > Amar has proposed[1] these changes in the past and I'd like to announce us > going > live with them as we've not received any strong feedback against it. > > ## Centos Regression > * On master, we only run

Re: [Gluster-devel] need reviews

2017-06-21 Thread Atin Mukherjee
On Wed, Jun 21, 2017 at 4:18 PM, Amar Tumballi wrote: > > > On Mon, May 29, 2017 at 1:11 PM, Hari Gowtham wrote: > >> Hi, >> >> I would like to get reviews on the following patches. >> >> https://review.gluster.org/#/c/15740/5 >>

Re: [Gluster-devel] Build failed in Jenkins: regression-test-with-multiplex #60

2017-06-12 Thread Atin Mukherjee
00--0001) > resolution failed > > > > In this test, we are trying to kill a brick and start it using the command > line. > I think that is what is actually failing. > In multiplexing, can we do it? Or is there some other way of doing the > same thing? > What's the reas

[Gluster-devel] Fwd: Build failed in Jenkins: regression-test-with-multiplex #60

2017-06-11 Thread Atin Mukherjee
https://review.gluster.org/#/c/16985/ has introduced a new test ec-data-heal.t which is now constantly failing with brick multiplexing. Can this be looked at? -- Forwarded message -- From: Date: Mon, Jun 12, 2017 at 6:33 AM Subject: Build failed in

Re: [Gluster-devel] Regression with brick multiplex turned on failing or aborted

2017-06-04 Thread Atin Mukherjee
On Sat, May 27, 2017 at 2:57 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > > On Sat, 27 May 2017 at 09:55, Nigel Babu <nig...@redhat.com> wrote: > >> FYI: There are now emails to gluster-maintainers@ about this. If you'd >> like to help diagnose/fix the prob

Re: [Gluster-devel] [Gluster-Maintainers] Backport for "Add back socket for polling of events immediately..."

2017-05-28 Thread Atin Mukherjee
On Sun, May 28, 2017 at 1:48 PM, Niels de Vos wrote: > On Fri, May 26, 2017 at 12:25:42PM -0400, Shyam wrote: > > Or this one: https://review.gluster.org/15036 > > > > This is backported to 3.8/10 and 3.11 and considering the size and > impact of > > the change, I wanted to be

Re: [Gluster-devel] Regression with brick multiplex turned on failing or aborted

2017-05-27 Thread Atin Mukherjee
On Sat, 27 May 2017 at 09:55, Nigel Babu wrote: > FYI: There are now emails to gluster-maintainers@ about this. If you'd > like to help diagnose/fix the problems, please see the job and logs here: > https://build.gluster.org/job/regression-test-with-multiplex/ > Thank you

Re: [Gluster-devel] [master] [FAILED] ./tests/bugs/core/bug-1432542-mpx-restart-crash.t: 12 new core files

2017-05-22 Thread Atin Mukherjee
17 at 9:25 PM, Milind Changire <mchan...@redhat.com> wrote: > backtraces are available at the job page mentioned in original mail > > Milind > > On 05/22/2017 09:22 PM, Atin Mukherjee wrote: > >> Do you have the backtraces? >> >> On Mon, 22 May 2017 at 21:20

Re: [Gluster-devel] [master] [FAILED] ./tests/bugs/core/bug-1432542-mpx-restart-crash.t: 12 new core files

2017-05-22 Thread Atin Mukherjee
Do you have the backtraces? On Mon, 22 May 2017 at 21:20, Milind Changire wrote: > Job: https://build.gluster.org/job/centos6-regression/4731/console > > -- > Milind > ___ > Gluster-devel mailing list > Gluster-devel@gluster.org >

Re: [Gluster-devel] tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - regression failures

2017-05-14 Thread Atin Mukherjee
hich is under progress. > So as discussed with Ravi, we were planning to mark it as bad at the > moment. Is that fine? > I'd suggest that and would request to mark it bad asap. It's been failing very frequently now. > Regards, > Karthik > > On Fri, May 12, 2017 at 3:33 PM, Atin Mukher

Re: [Gluster-devel] tests/bugs/gfapi/bug-1447266/bug-1447266.t

2017-05-13 Thread Atin Mukherjee
We'd need https://review.gluster.org/#/c/17177/ to be merged before this test starts working. Even though https://review.gluster.org/17177 was dependent on https://review.gluster.org/17216, Gerrit did not prevent this patch from being merged. May I request Jeff/Vijay/Shyam to take a look at this patch

[Gluster-devel] tests/bugs/gfapi/bug-1447266/bug-1447266.t broken in master?

2017-05-12 Thread Atin Mukherjee
Looks like this test is broken in the latest master. Every time I run it, the 26th and 28th tests fail. One of the recent regression runs [1] failed because of it. [1] https://build.gluster.org/job/centos6-regression/4550/console

Re: [Gluster-devel] Empty info file preventing glusterd from starting

2017-05-09 Thread Atin Mukherjee
arts in a loop? > Regards, > Abhishek > > On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> >> >> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com >> > wrote: >> >>> + Muthu

Re: [Gluster-devel] Empty info file preventing glusterd from starting

2017-05-09 Thread Atin Mukherjee
On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL wrote: > + Muthu-vingeshwaran > > On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL > wrote: > >> Hi Atin/Team, >> >> We are using gluster-3.7.6 with setup of two brick and while restart of >>

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-03 Thread Atin Mukherjee
On Wed, May 3, 2017 at 3:41 PM, Raghavendra Talur wrote: > On Tue, May 2, 2017 at 8:46 PM, Nithya Balachandran > wrote: > > > > > > On 2 May 2017 at 16:59, Shyam wrote: > >> > >> Talur, > >> > >> Please wait for this fix before

Re: [Gluster-devel] Tests that fail with multiplexing turned on

2017-05-02 Thread Atin Mukherjee
On Tue, May 2, 2017 at 2:36 AM, Jeff Darcy wrote: > Since the vast majority of our tests run without multiplexing, I'm going > to start running regular runs of all tests with multiplexing turned on. > You can see the patch here: > > https://review.gluster.org/#/c/17145/ > >

[Gluster-devel] regression in master

2017-05-02 Thread Atin Mukherjee
With the latest HEAD, all volume set operations fail for a volume in the STARTED state. I've figured out that commit 83abcba caused it; what baffles me is that this patch passed all the regressions. Are we installing nfs .so files on the slave machines, which is the only way to avoid these

[Gluster-devel] Nightly regression job with enabling brick multiplexing

2017-04-21 Thread Atin Mukherjee
As we don't run our .t files with brick multiplexing enabled for every patch, can we ensure there is a nightly regression trigger with the brick multiplexing feature enabled? The reason for this ask is simple: we have no control over regressions for this feature. I've already seen a

Re: [Gluster-devel] [Gluster-users] Glusterfs meta data space consumption issue

2017-04-16 Thread Atin Mukherjee
On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL wrote: > Hi All, > > Here we have below steps to reproduce the issue > > Reproduction steps: > > > > root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force > - create the gluster volume > > volume
