[Gluster-devel] tests are timing out in master branch

2019-05-14 Thread Atin Mukherjee
There are random tests that are timing out after 200 secs. My belief is that this is either a major regression introduced by some recent commit, or the builders have become extremely slow, which I highly doubt. I'd request that we first figure out the cause, get master back to its proper health and then get

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-08 Thread Atin Mukherjee
On Wed, May 8, 2019 at 7:38 PM Atin Mukherjee wrote: > builder204 needs to be fixed, too many failures, mostly none of the > patches are passing regression. > And with that builder201 joins the pool, https://build.gluster.org/job/centos7-regression/5943/consoleFull > On Wed, May

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-08 Thread Atin Mukherjee
builder204 needs to be fixed, too many failures, mostly none of the patches are passing regression. On Wed, May 8, 2019 at 9:53 AM Atin Mukherjee wrote: > > > On Wed, May 8, 2019 at 7:16 AM Sanju Rakonde wrote: > >> Deepshikha, >> >> I see the failure here[1] w

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-07 Thread Atin Mukherjee
ion tonight, so if I find nothing, >>> until I leave, I guess Deepshika will have to look. >>> >>> > On Wed, Apr 24, 2019 at 5:30 PM Yaniv Kaul wrote: >>> > >>> > > >>> > > >>> > > On Tue, Apr

Re: [Gluster-devel] [Gluster-users] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-07 Thread Atin Mukherjee
On Wed, May 8, 2019 at 9:45 AM Atin Mukherjee wrote: > > > On Wed, May 8, 2019 at 12:08 AM Vijay Bellur wrote: > >> >> >> On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath < >> rab...@redhat.com> wrote: >> >>> >>>

Re: [Gluster-devel] [Gluster-users] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-07 Thread Atin Mukherjee
On Wed, May 8, 2019 at 12:08 AM Vijay Bellur wrote: > > > On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath < > rab...@redhat.com> wrote: > >> >> + 1 to this. >> > > I have updated the footer of gluster-devel. If that looks ok, we can > extend it to gluster-users too. > > In case of a

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 16:07, Amar Tumballi Suryanarayan wrote: > > > On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee wrote: > >> >> >> On Fri, 3 May 2019 at 14:59, Xavi Hernandez wrote: >> >>> Hi Atin, >>> >>> On Fri, May 3, 2019

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 14:59, Xavi Hernandez wrote: > Hi Atin, > > On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee > wrote: > >> I'm bit puzzled on the way coverity is reporting the open defects on GD1 >> component. As you can see from [1], technically we hav

[Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
I'm a bit puzzled by the way Coverity is reporting the open defects on the GD1 component. As you can see from [1], technically we have 6 open defects and all of the rest are being marked as dismissed. We tried to put some additional annotations in the code through [2] to see if coverity starts feeling
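For context, Coverity suppressions of the kind [2] refers to are usually expressed as marker comments placed on the line directly above the flagged statement. A minimal hedged sketch of the idiom — the helper, call site, and event name below are illustrative, not taken from the GD1 sources:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper for illustration: Coverity's RESOURCE_LEAK
 * checker would flag the allocation if it thinks the pointer can be
 * dropped without being freed. */
static char *dup_label(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}

static char *make_component_label(void)
{
    /* The marker must sit directly above the flagged line, and the
     * bracketed event name must match the checker event shown in the
     * scan report; otherwise the tool ignores the suppression. */
    /* coverity[leaked_storage] - ownership is transferred to caller */
    return dup_label("gd1");
}
```

A mismatched or misplaced annotation is silently ignored, which is one plausible reason annotations appear to have no effect on the reported counts.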

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Atin Mukherjee
On Thu, 2 May 2019 at 20:38, Xavi Hernandez wrote: > On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee > wrote: > >> >> >> On Thu, 2 May 2019 at 19:14, Xavi Hernandez >> wrote: >> >>> On Thu, 2 May 2019, 15:37 Milind Changire, wrote: >&g

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Atin Mukherjee
On Thu, 2 May 2019 at 19:14, Xavi Hernandez wrote: > On Thu, 2 May 2019, 15:37 Milind Changire, wrote: > >> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez >> wrote: >> >>> Hi Ashish, >>> >>> On Thu, May 2, 2019 at 2:17 PM Ashish Pandey >>> wrote: >>> Xavi, I would like to keep

Re: [Gluster-devel] Weekly Untriaged Bugs

2019-04-28 Thread Atin Mukherjee
While I understand this report captures bugs filed in the last week that do not have the ‘Triaged’ keyword, would it make more sense to exclude bugs which aren’t in NEW state? I believe the intention of this report is to check which bugs haven’t been looked at by maintainers/developers yet. BZs

Re: [Gluster-devel] [Gluster-Maintainers] BZ updates

2019-04-23 Thread Atin Mukherjee
Absolutely agree and I definitely think this would help going forward. On Wed, Apr 24, 2019 at 8:45 AM Nithya Balachandran wrote: > All, > > When working on a bug, please ensure that you update the BZ with any > relevant information as well as the RCA. I have seen several BZs in the > past

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-04-22 Thread Atin Mukherjee
Is this back again? The recent patches are failing regression :-\ . On Wed, 3 Apr 2019 at 19:26, Michael Scherer wrote: > Le mercredi 03 avril 2019 à 16:30 +0530, Atin Mukherjee a écrit : > > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan > > wrot

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-17 Thread Atin Mukherjee
On Wed, 17 Apr 2019 at 10:53, Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi, > > In my recent test, I found that there are very severe glusterfsd memory > leak when enable socket ssl option > What gluster version are you testing? Would you be able to continue your

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Tue, Apr 16, 2019 at 10:27 PM Atin Mukherjee > wrote: > >> >> >> On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee >> wrote: >> >>> >>

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Tue, Apr 16, 2019 at 10:26 PM Atin Mukherjee wrote: > > > On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee > wrote: > >> >> >> On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan >> wrote: >> >>> Status: Tagging pending >>> >

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee wrote: > > > On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan > wrote: > >> Status: Tagging pending >> >> Waiting on patches: >> (Kotresh/Atin) - glusterd: fix loading ctime in client graph logic >> h

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
cent bug (15th April), does not seem to have any critical data > corruption or service availability issues, planning on not waiting for > the fix in 6.1 > > - Shyam > On 4/6/19 4:38 AM, Atin Mukherjee wrote: > > Hi Mohit, > > > > https://review.gluster.org/22495 should

[Gluster-devel] test failure reports for last 15 days

2019-04-10 Thread Atin Mukherjee
o a stage where master is unstable and we have to lock down the merges till all these failures are resolved. So please help. (Please note fstat stats show up the retries as failures too which in a way is right) On Tue, Feb 26, 2019 at 5:27 PM Atin Mukherjee wrote: > [1] captures the test fai

Re: [Gluster-devel] SHD crash in https://build.gluster.org/job/centos7-regression/5510/consoleFull

2019-04-10 Thread Atin Mukherjee
Rafi mentioned to me earlier that this will be fixed through https://review.gluster.org/22468 . This crash is more often seen in the nightly regression these days. Patch needs review and I'd request the respective maintainers to take a look at it. On Wed, Apr 10, 2019 at 5:08 PM Nithya

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-06 Thread Atin Mukherjee
Hi Mohit, https://review.gluster.org/22495 should get into 6.1 as it’s a regression. Can you please attach the respective bug to the tracker Ravi pointed out? On Sat, 6 Apr 2019 at 12:00, Ravishankar N wrote: > Tracker bug is https://bugzilla.redhat.com/show_bug.cgi?id=1692394, in > case

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-04-04 Thread Atin Mukherjee
ael Scherer a écrit : > > Le jeudi 04 avril 2019 à 13:53 +0200, Michael Scherer a écrit : > > > Le jeudi 04 avril 2019 à 16:13 +0530, Atin Mukherjee a écrit : > > > > Based on what I have seen that any multi node test case will fail > > > > and > >

[Gluster-devel] rebal-all-nodes-migrate.t always fails now

2019-04-04 Thread Atin Mukherjee
Based on what I have seen, any multi-node test case will fail, and the above one is picked first from that group. If I am correct, none of the code fixes will go through regression until this is fixed. I suspect it to be an infra issue again. If we look at

[Gluster-devel] shd multiplexing patch has introduced coverity defects

2019-04-04 Thread Atin Mukherjee
Based on yesterday's coverity scan report, 6 defects are introduced because of the shd multiplexing patch. Could you address them, Rafi? ___ Gluster-devel mailing list Gluster-devel@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

2019-04-03 Thread Atin Mukherjee
t; can happen or not. > That's precisely what the question is. Why are we suddenly seeing this happen so frequently? Today I saw at least 4 to 5 such failures already. Deepshika - Can you please help in inspecting this? > Regards, > Jiffin > > > - Original Message -

[Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

2019-04-02 Thread Atin Mukherjee
I'm observing the above test function failing too often, which causes the arbiter-mount.t test to fail in many regression jobs. Such a frequency of failures wasn't there earlier. Does anyone know what has changed recently to cause these failures in regression? I also hear when such failure happens a
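One pattern that makes export checks like this less flaky is to poll instead of asserting on the first attempt, in the spirit of the harness's EXPECT_WITHIN-style retries. A hedged sketch of such a wrapper — this is not the actual nfs.rc implementation, and the function name is illustrative:

```shell
#!/bin/sh
# Poll `showmount -e` until the export shows up or we run out of tries.
# A single failed probe right after volume start is common while the
# NFS server registers with rpcbind, so retrying absorbs that window.
wait_for_nfs_export() {
    path="$1"
    host="${2:-localhost}"
    tries="${3:-20}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if showmount -e "$host" 2>/dev/null | grep -q "$path"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

If the real failures persist across a generous retry window, the cause is more likely infra (rpcbind/ganesha service state on the builder) than timing.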

[Gluster-devel] Backporting important fixes in release branches

2019-04-02 Thread Atin Mukherjee
Of late, my observation has been that we're failing to backport critical/important fixes into the release branches, and we only do a course correction when users discover the problems, which isn't a great experience. I request all developers and maintainers to pay some attention to (a) deciding on

Re: [Gluster-devel] [ovirt-users] oVirt Survey 2019 results

2019-04-02 Thread Atin Mukherjee
Thanks Sahina for including the Gluster community mailing lists. As Sahina already mentioned, we had a strong focus on the upgrade testing path before releasing glusterfs-6. We conducted a test day and, along with functional pieces, tested upgrade paths from 3.12, 4 & 5 to release-6; we encountered

Re: [Gluster-devel] [Gluster-users] Quick update on glusterd's volume scalability improvements

2019-03-29 Thread Atin Mukherjee
On Sat, 30 Mar 2019 at 08:06, Vijay Bellur wrote: > > > On Fri, Mar 29, 2019 at 6:42 AM Atin Mukherjee > wrote: > >> All, >> >> As many of you already know that the design logic with which GlusterD >> (here on to be referred as GD1) was implemen

[Gluster-devel] Quick update on glusterd's volume scalability improvements

2019-03-29 Thread Atin Mukherjee
All, As many of you already know, the design logic with which GlusterD (here on to be referred to as GD1) was implemented has some fundamental scalability bottlenecks at the design level, especially around its way of handshaking configuration metadata and replicating it across all the peers.

Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-22 Thread Atin Mukherjee
On Fri, 22 Mar 2019 at 20:07, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Wed, Mar 20, 2019 at 10:00 AM Atin Mukherjee > wrote: > > > > From glusterd perspective couple of enhancements I'd propose to be added > (a) to capture get-

[Gluster-devel] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Atin Mukherjee
All, In the last few releases of glusterfs, with stability as a primary theme of the releases, there have been lots of changes done on code optimization with an expectation that such changes will help gluster provide better performance. While many of these changes do help, of late we
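The safety question in the subject comes down to zero-initialization: a GF_CALLOC-style allocation hands back zeroed memory, while a GF_MALLOC-style conversion is only safe if every field is provably written before it is first read. A minimal sketch using plain calloc/malloc — the struct and functions are hypothetical, not from the Gluster tree:

```c
#include <stdlib.h>

struct peer_ctx {          /* hypothetical struct, for illustration */
    int   port;
    char *hostname;
};

/* calloc-style: all fields start zeroed, so a later NULL check on
 * hostname is safe even if some init path forgets to set it. */
static struct peer_ctx *peer_ctx_new_zeroed(void)
{
    return calloc(1, sizeof(struct peer_ctx));
}

/* malloc-style: the memory is uninitialized, so the conversion is
 * only correct when every member is assigned before any read.
 * Skipping one assignment here reintroduces exactly the class of
 * uninitialized-read bug that the zeroing allocator was masking. */
static struct peer_ctx *peer_ctx_new_unzeroed(void)
{
    struct peer_ctx *ctx = malloc(sizeof(*ctx));
    if (ctx != NULL) {
        ctx->port = 24007;
        ctx->hostname = NULL;   /* must not be omitted */
    }
    return ctx;
}
```

This is why such conversions need an audit of every read path of the structure, not just a mechanical search-and-replace of the allocation call.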

Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Atin Mukherjee
From the glusterd perspective, a couple of enhancements I'd propose to be added: (a) capture the get-state dump and make it part of sosreport. Of late, we have seen the get-state dump has been very helpful in debugging a few cases apart from its original purpose of providing a source of cluster/volume

Re: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-07 Thread Atin Mukherjee
ill do the rest by next Monday. That’d help us to filter out the unnecessary ones and get to know how many actual blockers we have. On Tue, Mar 5, 2019 at 11:51 PM Shyam Ranganathan wrote: > On 3/4/19 12:33 PM, Shyam Ranganathan wrote: > > On 3/4/19 10:08 AM, Atin Mukherjee wrote: > >

Re: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Atin Mukherjee
On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan wrote: > Thanks to those who participated. > > Update at present: > > We found 3 blocker bugs in upgrade scenarios, and hence have marked release > as pending upon them. We will keep these lists updated about progress. I’d like to clarify

[Gluster-devel] test failure reports for last 30 days

2019-02-26 Thread Atin Mukherjee
[1] captures the test failures report for the last 30 days and we'd need volunteers/component owners to see why the number of failures is so high against a few tests. [1] https://fstat.gluster.org/summary?start_date=2019-01-26_date=2019-02-25=all

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-16 Thread Atin Mukherjee
On Tue, Jan 15, 2019 at 2:13 PM Atin Mukherjee wrote: > Interesting. I’ll do a deep dive at it sometime this week. > > On Tue, 15 Jan 2019 at 14:05, Xavi Hernandez wrote: > >> On Mon, Jan 14, 2019 at 11:08 AM Ashish Pandey >> wrote: >> >>> >>> I

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-15 Thread Atin Mukherjee
=, parent=0x7f8374ff8de0, data=0x7f83a8030ab0) at >>> ec-heald.c:294 >>> #4 0x7f83bc930ac2 in syncop_ftw (subvol=0x7f83a801b890, >>> loc=loc@entry=0x7f8374ff8de0, pid=pid@entry=-6, >>> data=data@entry=0x7f83a8030ab0, >>> fn=fn@entry=0x7f83add03140 ) at s

[Gluster-devel] GCS 0.5 release

2019-01-10 Thread Atin Mukherjee
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.5. Highlights and updates since v0.4: - GCS environment updated to kube 1.13 - CSI deployment moved to 1.0 - Integrated Anthill deployment - Kube & etcd metrics added to prometheus - Tuning of etcd to increase

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-10 Thread Atin Mukherjee
Mohit, Sanju - request you to investigate the failures related to glusterd and brick-mux and report back to the list. On Thu, Jan 10, 2019 at 12:25 AM Shyam Ranganathan wrote: > Hi, > > As part of branching preparation next week for release-6, please find > test failures and respective test

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1060

2018-12-30 Thread Atin Mukherjee
tests/bugs/ec/bug-1236065.t is failing quite regularly in brick multiplexing regression jobs. Request to get this fixed at the earliest. -- Forwarded message - From: Date: Mon, Dec 31, 2018 at 6:30 AM Subject: [Gluster-Maintainers] Build failed in Jenkins:

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4293

2018-12-30 Thread Atin Mukherjee
Can we please check the reason of the failures? -- Forwarded message - From: Date: Sat, 29 Dec 2018 at 23:48 Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4293 To: See <

[Gluster-devel] Update on GCS 0.5 release

2018-12-24 Thread Atin Mukherjee
We've decided to delay the GCS 0.5 release and postpone it by a few days (new date: 1st week of Jan), considering (a) most of the team members are out on holidays and (b) some of the critical issues/PRs are yet to be addressed from [1]. Regards, GCS team [1] https://waffle.io/gluster/gcs?label=GCS%2F0.5

[Gluster-devel] GCS 0.4 release

2018-12-12 Thread Atin Mukherjee
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.4. The release was a bit delayed to address some of the critical issues identified. This release brings in a good amount of bug fixes along with some key feature enhancements in GlusterD2. We’d request all of you to try

Re: [Gluster-devel] Shard test failing more commonly on master

2018-12-04 Thread Atin Mukherjee
We can't afford to keep a bad test hanging for more than a day, as it blocks other fixes (I see at least 4-5 more patches that failed on the same test today). I thought we already had a rule to mark a test bad at the earliest in such occurrences. Not sure why we haven't done that yet. In
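Mechanically, "marking a test bad" is just a skip list the harness consults before running each .t file. A hedged sketch of that idiom — the variable, function, and listed test name are illustrative, not the actual run-tests.sh contents:

```shell
#!/bin/sh
# Known-bad tests are skipped instead of failing the whole run; an
# entry should be removed again once the test is fixed and stable.
BAD_TESTS="tests/bugs/shard/example-known-bad.t"

is_bad_test() {
    # Pad with spaces so whole-path matching works inside the list.
    case " $BAD_TESTS " in
        *" $1 "*) return 0 ;;   # listed: skip it
        *)        return 1 ;;   # not listed: run it
    esac
}

run_one_test() {
    if is_bad_test "$1"; then
        echo "SKIP $1 (known bad)"
    else
        echo "RUN  $1"
    fi
}
```

The point of the thread stands either way: the cost of adding one line to such a list for a day is far lower than blocking every in-flight patch on a known-flaky test.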

Re: [Gluster-devel] GD2 & glusterfs smoke issue

2018-11-08 Thread Atin Mukherjee
On Thu, 8 Nov 2018 at 15:07, Yaniv Kaul wrote: > > > On Tue, Nov 6, 2018 at 11:34 AM Atin Mukherjee > wrote: > >> We have enabled GD2 smoke results as a mandatory vote in glusterfs smoke >> since yesterday through BZ [1], however we just started seeing GD2 smoke >&

[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2018-11-06 Thread Atin Mukherjee
New defects introduced in the posix xlator. -- Forwarded message - From: Date: Tue, Nov 6, 2018 at 8:44 PM Subject: New Defects reported by Coverity Scan for gluster/glusterfs Hi, Please find the latest report on new defect(s) introduced to gluster/glusterfs found with Coverity

[Gluster-devel] GD2 & glusterfs smoke issue

2018-11-06 Thread Atin Mukherjee
We have enabled GD2 smoke results as a mandatory vote in the glusterfs smoke since yesterday through BZ [1]; however, we just started seeing the GD2 smoke failing, which means the glusterfs smoke on all patches will not go through at the moment. GD2 dev is currently working on it and trying to rectify the

Re: [Gluster-devel] Whats latest on Glusto + GD2 integration?

2018-11-04 Thread Atin Mukherjee
Thank you, Rahul, for the report. This does help keep the community up to date on the effort being put in here and understand where things stand. Some comments inline. On Sun, Nov 4, 2018 at 8:01 PM Rahul Hinduja wrote: > Hello, > > Over past few weeks, few folks are engaged in integrating gd2

[Gluster-devel] Update on GCS 0.2 release

2018-10-29 Thread Atin Mukherjee
The GCS 0.2 release is a bit delayed and we expect to have it out by this week. The primary reason is one of the issues filed under GCS which was highlighted as a critical issue at [1]. The team is actively working on this issue to understand if it has something to do with

Re: [Gluster-devel] Gluster Weekly Report : Static Analyser

2018-10-26 Thread Atin Mukherjee
On Fri, 26 Oct 2018 at 21:17, Sunny Kumar wrote: > Hello folks, > > The current status of static analyser is below: > > Coverity scan status: > Last week we started from 145 and now its 135 (26st Oct scan) and 3 > new defects got introduced. We fixed all 3 of them. > Major contributors - Sunny

[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2018-10-12 Thread Atin Mukherjee
Write behind related changes introduced new defects. -- Forwarded message - From: Date: Fri, 12 Oct 2018 at 20:43 Subject: New Defects reported by Coverity Scan for gluster/glusterfs To: Hi, Please find the latest report on new defect(s) introduced to gluster/glusterfs found

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-10 Thread Atin Mukherjee
On Wed, 10 Oct 2018 at 20:30, Shyam Ranganathan wrote: > The following options were added post 4.1 and are part of 5.0 as the > first release for the same. They were added in as part of bugs, and > hence looking at github issues to track them as enhancements did not > catch the same. > > We need

Re: [Gluster-devel] Nightly build status (week of 01 - 07 Oct, 2018)

2018-10-09 Thread Atin Mukherjee
On Wed, Oct 10, 2018 at 4:20 AM Shyam Ranganathan wrote: > We have a set of 4 cores which seem to originate from 2 bugs as filed > and referenced below. > > Bug 1: https://bugzilla.redhat.com/show_bug.cgi?id=1636570 > Cleanup sequence issues in posix xlator. Mohit/Xavi/Du/Pranith are we >

[Gluster-devel] GCS 0.1 release!

2018-10-09 Thread Atin Mukherjee
== Overview Today, we are announcing the availability of GCS (Gluster Container Storage) 0.1. This initial release is designed to provide a platform for community members to try out and provide feedback on the new Gluster container storage stack. This new stack is a collaboration across a number

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-05 Thread Atin Mukherjee
On Fri, 5 Oct 2018 at 20:29, Shyam Ranganathan wrote: > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote: > > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > >> RC1 would be around 24th of Sep. with final release tagging around 1st > >> of Oct. > > > > RC1 now stands to be tagged tomorrow, and

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Atin Mukherjee
Deepshika, Please keep us posted on if you see the particular glusterd test failing again. It’ll be great to see this nightly job green sooner than later :-) . On Thu, 4 Oct 2018 at 15:07, Deepshikha Khandelwal wrote: > On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde wrote: > > > > > > > > On

Re: [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Atin Mukherjee
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan wrote: > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > > RC1 would be around 24th of Sep. with final release tagging around 1st > > of Oct. > > RC1 now stands to be tagged tomorrow, and patches that are being > targeted for a back port

Re: [Gluster-devel] Status update : Brick Mux threads reduction

2018-10-03 Thread Atin Mukherjee
I have rebased [1] and triggered brick-mux regression as we fixed one genuine snapshot test failure in brick mux through https://review.gluster.org/#/c/glusterfs/+/21314/ which got merged today. On Thu, Oct 4, 2018 at 10:39 AM Poornima Gurusiddaiah wrote: > Hi, > > For each brick, we create

Re: [Gluster-devel] Proposal to change Gerrit -> Bugzilla updates

2018-09-11 Thread Atin Mukherjee
On Mon, Sep 10, 2018 at 7:09 PM Shyam Ranganathan wrote: > On 09/10/2018 08:37 AM, Nigel Babu wrote: > > Hello folks, > > > > We now have review.gluster.org as an > > external tracker on Bugzilla. Our current automation when there is a > > bugzilla attached to a patch

[Gluster-devel] glusterd.log file - few observations

2018-09-09 Thread Atin Mukherjee
As highlighted in the last maintainers meeting, I'm seeing some log entries in the glusterd log file which are (a) informative logs in one way, but can cause excessive logging and potentially run a user into an out-of-space issue, and (b) logs which might not be errors or could be avoided. Even though

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4067

2018-08-17 Thread Atin Mukherjee
C7 nightly has a crash too. -- Forwarded message - From: Date: Sat, 18 Aug 2018 at 00:01 Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4067 To: See < https://build.gluster.org/job/regression-test-burn-in/4067/display/redirect?page=changes >

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #831

2018-08-17 Thread Atin Mukherjee
This is the first nightly job failure since we reopened the master branch. The crash seems to be from the fini() code path. It needs investigation and an RCA. -- Forwarded message - From: Date: Fri, 17 Aug 2018 at 23:54 Subject: [Gluster-Maintainers] Build failed in Jenkins:

Re: [Gluster-devel] Master branch is closed

2018-08-13 Thread Atin Mukherjee
Nigel, Now that the master branch is reopened, can you please revoke the commit access restrictions? On Mon, 6 Aug 2018 at 09:12, Nigel Babu wrote: > Hello folks, > > Master branch is now closed. Only a few people have commit access now and > it's to be exclusively used to merge fixes to make

[Gluster-devel] tests/basic/afr/sparse-file-self-heal.t - crash generated

2018-08-11 Thread Atin Mukherjee
https://build.gluster.org/job/regression-on-demand-multiplex/217/consoleFull tests/basic/afr/sparse-file-self-heal.t crashed

[Gluster-devel] Out of regression builders

2018-08-11 Thread Atin Mukherjee
As both Shyam and I are running multiple flavours of manually triggered regression jobs (lcov, centos-7, brick-mux) on top of https://review.gluster.org/#/c/glusterfs/+/20637/ , we'd need to occupy most of the builders. I have currently run out of builders to trigger some of the runs and have observed

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Fri, August 9th)

2018-08-11 Thread Atin Mukherjee
I saw the same behaviour for https://build.gluster.org/job/regression-on-demand-full-run/47/consoleFull as well. In both cases the common pattern is that a test was retried but the overall job succeeded. Is this a bug which got introduced recently? At the moment, this is blocking us from debugging any

[Gluster-devel] tests/bugs/core/multiplex-limit-issue-151.t timed out

2018-08-10 Thread Atin Mukherjee
https://build.gluster.org/job/line-coverage/455/consoleFull 1 test failed: tests/bugs/core/multiplex-limit-issue-151.t (timed out) The last job https://build.gluster.org/job/line-coverage/454/consoleFull took only 21 secs, so we're nowhere near breaching the timeout threshold.

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Thu, August 09th)

2018-08-10 Thread Atin Mukherjee
Pranith, https://review.gluster.org/c/glusterfs/+/20685 seems to have caused multiple failed runs of https://review.gluster.org/c/glusterfs/+/20637/8 in yesterday's report. Did you get a chance to look at it? On Fri, Aug 10, 2018 at 1:03 PM Pranith Kumar Karampuri wrote: > > > On Fri,

[Gluster-devel] tests/bugs/glusterd/quorum-validation.t ==> glusterfsd core

2018-08-08 Thread Atin Mukherjee
See https://build.gluster.org/job/line-coverage/435/consoleFull . The core file can be extracted from [1]. The core seems to be coming from the changelog xlator. Please note line-cov doesn't run with brick mux enabled. [1]

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-08 Thread Atin Mukherjee
On Thu, 9 Aug 2018 at 06:34, Shyam Ranganathan wrote: > Today's patch set 7 [1], included fixes provided till last evening IST, > and its runs can be seen here [2] (yay! we can link to comments in > gerrit now). > > New failures: (added to the spreadsheet) >

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-08 Thread Atin Mukherjee
On Wed, Aug 8, 2018 at 5:08 AM Shyam Ranganathan wrote: > Deserves a new beginning, threads on the other mail have gone deep enough. > > NOTE: (5) below needs your attention, rest is just process and data on > how to find failures. > > 1) We are running the tests using the patch [2]. > > 2) Run

Re: [Gluster-devel] Test: ./tests/bugs/ec/bug-1236065.t

2018-08-07 Thread Atin Mukherjee
+Mohit Requesting Mohit for help. On Wed, 8 Aug 2018 at 06:53, Shyam Ranganathan wrote: > On 08/07/2018 07:37 PM, Shyam Ranganathan wrote: > > 5) Current test failures > > We still have the following tests failing and some without any RCA or > > attention, (If something is incorrect, write

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-05 Thread Atin Mukherjee
On Mon, 6 Aug 2018 at 06:09, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Mon, Aug 6, 2018 at 5:17 AM, Amye Scavarda wrote: > > > > > > On Sun, Aug 5, 2018 at 3:24 PM Shyam Ranganathan > > wrote: > >> > >> On 07/31/2018 07:16 AM, Shyam Ranganathan wrote: > >> > On

[Gluster-devel] validation-server-quorum.t crash

2018-08-04 Thread Atin Mukherjee
The patch [1] addresses the $Subject and it needs to get into master to address the frequent failures. Request for your reviews. [1] https://review.gluster.org/#/c/20584/

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Atin Mukherjee
New addition - tests/basic/volume.t - failed twice atleast with shd core. One such ref - https://build.gluster.org/job/centos7-regression/2058/console On Thu, Aug 2, 2018 at 6:28 PM Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Thu, Aug 2, 2018 at 5:48 PM, Kotresh

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Atin Mukherjee
On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > > > On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez > wrote: > >> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee >> wrote: >> >>> >>> >&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Atin Mukherjee
On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee wrote: > I just went through the nightly regression report of brick mux runs and > here's what I can sum

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Atin Mukherjee
4/console >> >> -Krutika >> >> On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee >> wrote: >> >>> tests/bugs/distribute/bug-1122443.t fails my set up (3 out of 5 times) >>> running with master branch. As per my knowledge I've not seen th

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-07-31 Thread Atin Mukherjee
I just went through the nightly regression report of the brick mux runs and here's what I can summarize. Fails only with brick-mux

[Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-07-29 Thread Atin Mukherjee
tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times) running with the master branch. To my knowledge, I've not seen this test failing earlier. Looks like some recent change has caused it. One such instance is https://build.gluster.org/job/centos7-regression/1955/ . Request

[Gluster-devel] tests/00-geo-rep/georep-basic-dr-tarssh.t times out after 200 secs

2018-07-10 Thread Atin Mukherjee
https://build.gluster.org/job/regression-on-demand-multiplex/115/consoleFull is one such reference. I am sure we should see it in non-brick-mux regression reports too.

Re: [Gluster-devel] [Gluster-infra] bug-1432542-mpx-restart-crash.t failing

2018-07-07 Thread Atin Mukherjee
https://build.gluster.org/job/regression-test-with-multiplex/794/display/redirect has the same test failing. Is the reason for the failure different, given this is on Jenkins? On Sat, 7 Jul 2018 at 19:12, Deepshikha Khandelwal wrote: > Hi folks, > > The issue[1] has been resolved. Now the

Re: [Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Atin Mukherjee
On Fri, 29 Jun 2018 at 21:35, Poornima Gurusiddaiah wrote: > > > On Fri, Jun 29, 2018, 5:54 PM Mohit Agrawal wrote: > >> Hi Poornima, >> >> It seems the test case(tests/bugs/core/bug-1432542-mpx-restart-crash.t) >> is crashing because in quota xlator rpc-cleanup code is not perfectly >> handled

Re: [Gluster-devel] Running regressions with GD2

2018-06-01 Thread Atin Mukherjee
Thanks Nigel for initiating this email. This is a pending (and critical) task for us to get back to and take to completion, as gaining confidence that overall Gluster features work with GD2 is a very important task for us. It's just that in 4.1 the team was busy in finishing some of the

Re: [Gluster-devel] trash.t failure

2018-04-18 Thread Atin Mukherjee
patch. > Please re-land the patch with any fixes as a fresh review. > Thanks Nigel. The patches waiting on the regression queue need to be rebased. Only doing a ‘recheck centos’ is not going to be helpful. > > On Wed, Apr 18, 2018 at 8:25 AM, Atin Mukherjee <amukh...@re

[Gluster-devel] trash.t failure

2018-04-17 Thread Atin Mukherjee
commit d206fab73f6815c927a84171ee9361c9b31557b1 Author: Kinglong Mee Date: Mon Apr 9 08:33:51 2018 -0400 storage/posix: add pgfid in readdirp if needed Change-Id: I6745428fd9d4e402bf2cad52cee8ab46b7fd822f fixes: bz#1560319 Signed-off-by: Kinglong Mee

Re: [Gluster-devel] Regression with brick multiplex on demand

2018-04-17 Thread Atin Mukherjee
Super useful. Thanks Nigel (and Amar for the idea). On Tue, Apr 17, 2018 at 12:04 PM, Nigel Babu wrote: > Hello folks, > > In the past if you had a patch that was fixing a brick multiplex failure, > you couldn't test whether it actually fixed brick multiplex failures >

[Gluster-devel] replace-brick commit force fails in multi node cluster

2018-03-27 Thread Atin Mukherjee
While writing a test for the patch fix of BZ https://bugzilla.redhat.com/show_bug.cgi?id=1560957, I just can't make my test case pass: a replace-brick commit force always fails on a multi-node cluster, and that's on the latest mainline code. *The fix is a one liner:*
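For context, a reproducer for this kind of failure would normally be written in Gluster's own .t regression framework. The fragment below is a hypothetical sketch: the volume name, brick paths, and replica layout are invented, and it relies on the TEST macro and cluster helpers sourced from tests/include.rc and tests/cluster.rc, so it only runs inside the project's regression harness, not standalone:

```shell
#!/bin/bash
# Hypothetical .t-style reproducer for replace-brick commit force on a
# multi-node cluster. $H1/$H2, $B1/$B2, $V0, $CLI_1, TEST, cleanup and
# launch_cluster come from the sourced test framework; brick paths are
# made up for illustration.
. $(dirname $0)/../../include.rc
. $(dirname $0)/../../cluster.rc

cleanup

# Bring up a 2-node cluster and a replica-2 volume spanning both nodes.
TEST launch_cluster 2
TEST $CLI_1 peer probe $H2
TEST $CLI_1 volume create $V0 replica 2 $H1:$B1/b1 $H2:$B2/b2
TEST $CLI_1 volume start $V0

# The step reported to fail: replace a brick with commit force while
# more than one node is in the cluster.
TEST $CLI_1 volume replace-brick $V0 $H2:$B2/b2 $H2:$B2/b2-new commit force

cleanup
```

If the reported bug is present, the replace-brick TEST line is the one that fails, which is what a regression for the BZ above would want to catch.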

Re: [Gluster-devel] ./tests/basic/md-cache/bug-1418249.t failing

2018-03-26 Thread Atin Mukherjee
We have more problems: nl-cache.t is also failing. I think we need to make sure that when a patch introduces changes to any of the group profiles, a full regression is triggered. On Mon, 26 Mar 2018 at 09:50, Susant Palai wrote: > Sent a patch here -

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-16 Thread Atin Mukherjee
On Fri, Mar 16, 2018 at 11:03 AM, Vijay Bellur <vbel...@redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 9:48 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> >> >> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur <vbel...@redhat.com> wr

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-14 Thread Atin Mukherjee
02:25 PM, Vijay Bellur wrote: >> >> >> >> >> >> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY >> >> <kkeit...@redhat.com <mailto:kkeit...@redhat.com>> wrote: >> >> >> >> On 03/12/2018 02:32 PM, Shyam Rangan

Re: [Gluster-devel] Announcing Softserve- serve yourself a VM

2018-03-12 Thread Atin Mukherjee
On Wed, Feb 28, 2018 at 6:56 PM, Deepshikha Khandelwal wrote: > Hi, > > We have launched the alpha version of SOFTSERVE[1], which allows Gluster > Github organization members to provision virtual machines for a specified > duration of time. These machines will be deleted

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-12 Thread Atin Mukherjee
On Mon, Mar 12, 2018 at 5:51 PM, Amar Tumballi wrote: > Hi all, > > Below is the proposal which most of us in maintainers list have agreed > upon. Sharing it here so we come to conclusion quickly, and move on :-) > > --- > > Until now, Gluster project’s releases followed

Re: [Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Atin Mukherjee
Mohit is aware of this issue and currently working on a patch. On Mon, Mar 12, 2018 at 9:47 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, > In https://build.gluster.org/job/centos7-regression/274/consoleFull, > the test in $SUBJECT generated core. It seems to be

Re: [Gluster-devel] [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Atin Mukherjee
On Thu, 8 Mar 2018 at 11:43, Kaushal M wrote: > On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi > wrote: > > Meeting date: 03/07/2018 (March 3rd, 2018. 19:30IST, 14:00UTC, 09:00EST) > > > > BJ Link > > > > Bridge: https://bluejeans.com/205933580 > >

Re: [Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-03-07 Thread Atin Mukherjee
On Wed, Mar 7, 2018 at 7:38 AM, Varsha Rao <va...@redhat.com> wrote: > Hello Atin, > > On Tue, Mar 6, 2018 at 10:23 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > > Looks like the failure is back again. Refer > > https://build.gluster.org/job/regres

Re: [Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-03-06 Thread Atin Mukherjee
Looks like the failure is back again. Refer https://build.gluster.org/job/regression-test-with-multiplex/663/console and this has been failing in other occurrences too. On Mon, Feb 26, 2018 at 2:58 PM, Varsha Rao <va...@redhat.com> wrote: > Hi Atin, > > On Mon, Feb 26, 2018 at

Re: [Gluster-devel] two potential memory leak place found on glusterfs 3.12.3

2018-02-26 Thread Atin Mukherjee
+Gaurav On Mon, Feb 26, 2018 at 2:02 PM, Raghavendra Gowdappa wrote: > +glusterd devs > > On Mon, Feb 26, 2018 at 1:41 PM, Storage, Dev (Nokia - Global) < > dev.stor...@nokia.com> wrote: > >> Hi glusterfs experts, >> >>Good day! >> >>During our recent test
