Re: [Gluster-devel] Query regards to expose client-pid to fuse process

2019-10-11 Thread Nithya Balachandran
On Fri, 11 Oct 2019 at 14:56, Mohit Agrawal wrote: > Hi, > > I have a query specific to authenticating a client based on the PID > (client-pid). > It can break the bricks xlator functionality. Usually, on the brick side > we take a decision about the > source of fop request based on PID. If
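
For readers skimming the archive, the convention under discussion can be sketched in a few lines of C. This is a paraphrase from memory, not the actual glusterfs header: upstream reserves negative client PIDs for internal daemons, and brick xlators branch on frame->root->pid.

    /* Sketch of the glusterfs client-PID convention (values reproduced
     * from memory -- check libglusterfs for the authoritative enum).
     * Anything >= 0 is an ordinary external client; internal daemons
     * identify themselves with reserved negative PIDs. Exposing
     * client-pid to any fuse mount would let an external client
     * impersonate one of these daemons. */
    #include <stdbool.h>
    #include <sys/types.h>

    typedef enum {
        GF_CLIENT_PID_MAX        =  0, /* >= 0: external client */
        GF_CLIENT_PID_GSYNCD     = -1, /* geo-replication */
        GF_CLIENT_PID_DEFRAG     = -3, /* rebalance */
        GF_CLIENT_PID_SELF_HEALD = -4, /* self-heal daemon */
    } gf_client_pid_t;

    static bool
    is_internal_client(pid_t client_pid)
    {
        return client_pid < GF_CLIENT_PID_MAX;
    }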

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
cted to glusterfs community. > > Regards, > Abhishek > > On Thu, Jun 6, 2019, 16:08 Nithya Balachandran > wrote: > >> Hi Abhishek, >> >> I am still not clear as to the purpose of the tests. Can you clarify why >> you are using valgrind and why you think t

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
the below script to see the memory increase while the above script is running in the background. > > *ps_mem.py* > > I am attaching the script files as well as the result obtained after testing > the scenario. > > On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran >
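
A minimal sketch of the kind of sampling ps_mem.py does, assuming a Linux /proc layout; this is an illustration, not the script attached in the thread:

    /* Sample the resident set size of a process once per second by
     * reading the VmRSS field of /proc/<pid>/status (Linux-specific). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        char path[64];
        snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);

        for (;;) {
            FILE *f = fopen(path, "r");
            if (!f)
                break;                       /* process exited */
            char line[256];
            while (fgets(line, sizeof(line), f))
                if (strncmp(line, "VmRSS:", 6) == 0)
                    fputs(line, stdout);     /* e.g. "VmRSS: 123456 kB" */
            fclose(f);
            sleep(1);
        }
        return 0;
    }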

Re: [Gluster-devel] [Gluster-users] Memory leak in glusterfs

2019-06-05 Thread Nithya Balachandran
Hi, Writing to a volume should not affect glusterd. The stack you have shown in the valgrind looks like the memory used to initialise the structures glusterd uses and will free only when it is stopped. Can you provide more details to what it is you are trying to test? Regards, Nithya On Tue,

[Gluster-devel] BZ updates

2019-04-23 Thread Nithya Balachandran
All, When working on a bug, please ensure that you update the BZ with any relevant information as well as the RCA. I have seen several BZs in the past which report crashes, however they do not have a bt or RCA captured. Having this information in the BZ makes it much easier to see if a newly

[Gluster-devel] SHD crash in https://build.gluster.org/job/centos7-regression/5510/consoleFull

2019-04-10 Thread Nithya Balachandran
Hi, My patch is unlikely to have caused this as the changes are only in dht. Can someone take a look? Thanks, Nithya

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-04-05 Thread Nithya Balachandran
On Fri, 5 Apr 2019 at 12:16, Michael Scherer wrote: > On Thursday, 4 April 2019 at 18:24 +0200, Michael Scherer wrote: > > On Thursday, 4 April 2019 at 19:10 +0300, Yaniv Kaul wrote: > > > I'm not convinced this is solved. Just had what I believe is a > > > similar > > > failure: > > > > > >

Re: [Gluster-devel] [Gluster-users] Prioritise local bricks for IO?

2019-03-28 Thread Nithya Balachandran
On Wed, 27 Mar 2019 at 20:27, Poornima Gurusiddaiah wrote: > This feature is not under active development as it was not used widely. > AFAIK its not supported feature. > +Nithya +Raghavendra for further clarifications. > This is not actively supported - there has been no work done on this

Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Nithya Balachandran
On Thu, 21 Mar 2019 at 21:14, Yaniv Kaul wrote: > > > On Thu, Mar 21, 2019 at 5:23 PM Nithya Balachandran > wrote: > >> >> >> On Thu, 21 Mar 2019 at 16:16, Atin Mukherjee wrote: >> >>> All, >>> >>> In the last few releases o

Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Nithya Balachandran
On Thu, 21 Mar 2019 at 16:16, Atin Mukherjee wrote: > All, > > In the last few releases of glusterfs, with stability as a primary theme > of the releases, there has been lots of changes done on the code > optimization with an expectation that such changes will have gluster to > provide better
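
The risk being weighed in this thread can be shown with a hypothetical example: replacing a zeroing allocator with a plain one is safe only if every field is written before it is first read, which is why each conversion needs individual review. Illustrative types, not gluster code:

    #include <stdlib.h>
    #include <string.h>

    typedef struct entry { char *name; int refcount; } entry_t;

    /* Hypothetical allocation site: with a calloc-style allocator,
     * refcount started at 0 for free; after converting to malloc it is
     * garbage unless explicitly initialized. */
    entry_t *
    entry_new(const char *name)
    {
        entry_t *e = malloc(sizeof(*e));   /* was: calloc(1, sizeof(*e)) */
        if (!e)
            return NULL;
        e->name = strdup(name);
        /* BUG if this line is omitted after the conversion: */
        e->refcount = 0;
        return e;
    }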

Re: [Gluster-devel] GlusterFs v4.1.5: Need help on bitrot detection

2019-02-20 Thread Nithya Balachandran
On Wed, 20 Feb 2019 at 21:03, Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > Hi Chandranana, > > We are trying to find a BigEndian platform to test this out at the moment, > will get back to you on this. > > Meantime, did you run the entire regression suit? Is it the only test >

Re: [Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Nithya Balachandran
The volume is not stopped before unmounting the bricks. I will send a fix. On Wed, 13 Feb 2019 at 10:00, Raghavendra Gowdappa wrote: > > > On Wed, Feb 13, 2019 at 9:54 AM Amar Tumballi Suryanarayan < > atumb...@redhat.com> wrote: > >> >> >> On Wed, Feb

Re: [Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Nithya Balachandran
I'll take a look at this today. The logs indicate the test completed in under 3 minutes but something seems to be holding up the cleanup. On Tue, 12 Feb 2019 at 19:30, Raghavendra Gowdappa wrote: > > > On Tue, Feb 12, 2019 at 7:16 PM Mohit Agrawal wrote: > >> Hi, >> >> I have observed the test

Re: [Gluster-devel] https://review.gluster.org/#/c/glusterfs/+/19778/

2019-01-08 Thread Nithya Balachandran
On Wed, 9 Jan 2019 at 08:28, Amar Tumballi Suryanarayan wrote: > > > On Tue, Jan 8, 2019 at 8:04 PM Shyam Ranganathan > wrote: > >> On 1/8/19 8:33 AM, Nithya Balachandran wrote: >> > Shyam, what is your take on this? >> > An upstream user has tried i

Re: [Gluster-devel] https://review.gluster.org/#/c/glusterfs/+/19778/

2019-01-08 Thread Nithya Balachandran
ths > away. > > On Fri, Dec 28, 2018 at 8:19 AM Nithya Balachandran > wrote: > >> Hi, >> >> Can we backport this to release-5 ? We have several reports of high >> memory usage in fuse clients from users and this i

[Gluster-devel] https://review.gluster.org/#/c/glusterfs/+/19778/

2018-12-27 Thread Nithya Balachandran
Hi, Can we backport this to release-5 ? We have several reports of high memory usage in fuse clients from users and this is likely to help. Regards, Nithya

Re: [Gluster-devel] Regression failure: https://build.gluster.org/job/centos7-regression/3678/

2018-11-14 Thread Nithya Balachandran
: > On 11/14/2018 10:04 AM, Nithya Balachandran wrote: > > Hi Mohit, > > > > The regression run in the subject has failed because a brick has crashed > in > > > > bug-1432542-mpx-restart-crash.t > > > > > > *06:03:38* 1 test(s) generated core

[Gluster-devel] Regression failure: https://build.gluster.org/job/centos7-regression/3678/

2018-11-14 Thread Nithya Balachandran
Hi Mohit, The regression run in the subject has failed because a brick has crashed in bug-1432542-mpx-restart-crash.t *06:03:38* 1 test(s) generated core *06:03:38* ./tests/bugs/core/bug-1432542-mpx-restart-crash.t *06:03:38* The brick process has crashed in posix_fs_health_check as

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-11 Thread Nithya Balachandran
On 11 October 2018 at 18:30, Shyam Ranganathan wrote: > On 10/10/2018 11:20 PM, Atin Mukherjee wrote: > > > > > > On Wed, 10 Oct 2018 at 20:30, Shyam Ranganathan > > wrote: > > > > The following options were added post 4.1 and are part of 5.0 as the > > first

Re: [Gluster-devel] [Gluster-Maintainers] Test health report (week ending 19th Aug. 2018)

2018-08-24 Thread Nithya Balachandran
On 20 August 2018 at 23:06, Shyam Ranganathan wrote: > Although tests have stabilized quite a bit, and from the maintainers > meeting we know that some tests have patches coming in, here is a > readout of other tests that needed a retry. We need to reduce failures > on retries as well, to be

Re: [Gluster-devel] gluster fuse comsumes huge memory

2018-08-08 Thread Nithya Balachandran
Is it possible for you to send us the statedump file? It will be easier than going back and forth over emails. Thanks, Nithya On 9 August 2018 at 09:25, huting3 wrote: > Yes, I got the dump file and found there are many huge num_allocs just > like following: > > I found memusage of 4 variable
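
For anyone following along, a statedump, which carries the per-type num_allocs counters quoted above, is produced by sending SIGUSR1 to the glusterfs process; by default the dump is written under /var/run/gluster. A trivial sketch of the trigger:

    /* Ask a glusterfs process for a statedump: equivalent to
     * `kill -USR1 <pid>`. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    int
    main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <glusterfs-pid>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);
        if (kill(pid, SIGUSR1) != 0) {
            perror("kill");
            return 1;
        }
        return 0;
    }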

Re: [Gluster-devel] [Gluster-Maintainers] Test: ./tests/bugs/distribute/bug-1042725.t

2018-08-08 Thread Nithya Balachandran
On 8 August 2018 at 06:11, Shyam Ranganathan wrote: > On 08/07/2018 07:37 PM, Shyam Ranganathan wrote: > > 6) Tests that are addressed or are not occurring anymore are, > > > > ./tests/bugs/distribute/bug-1042725.t > > The above test fails, I think due to cleanup not completing in the > previous

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 2 August 2018 at 05:46, Shyam Ranganathan wrote: > Below is a summary of failures over the last 7 days on the nightly > health check jobs. This is one test per line, sorted in descending order > of occurrence (IOW, most frequent failure is on top). > > The list includes spurious failures as

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 6 August 2018 at 18:03, Nithya Balachandran wrote: > > > On 2 August 2018 at 05:46, Shyam Ranganathan wrote: > >> Below is a summary of failures over the last 7 days on the nightly >> health check jobs. This is one test per line, sorted in descending order >

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-06 Thread Nithya Balachandran
On 2 August 2018 at 05:46, Shyam Ranganathan wrote: > Below is a summary of failures over the last 7 days on the nightly > health check jobs. This is one test per line, sorted in descending order > of occurrence (IOW, most frequent failure is on top). > > The list includes spurious failures as

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Nithya Balachandran
On 31 July 2018 at 22:11, Atin Mukherjee wrote: > I just went through the nightly regression report of brick mux runs and > here's what I can summarize. > > > >

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Nithya Balachandran
On 31 July 2018 at 22:11, Atin Mukherjee wrote: > I just went through the nightly regression report of brick mux runs and > here's what I can summarize. > > > >

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Nithya Balachandran
a look and fix this? To summarize, this is _not_ a spurious failure. regards, Nithya On 3 August 2018 at 14:13, Nithya Balachandran wrote: > This is a new issue - the test uses ls -l to get some information. With > the latest master, ls -l returns strange results the first time it is

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Nithya Balachandran
This is a new issue - the test uses ls -l to get some information. With the latest master, ls -l returns strange results the first time it is called on the mount point causing the test to fail: With the latest master, I created a single brick volume and some files inside it. [root@rhgs313-6 ~]#

Re: [Gluster-devel] Tests failing for distributed regression framework

2018-07-18 Thread Nithya Balachandran
Hi Mohit, Please take a look at BZ 1602282. Thanks, Nithya On 18 July 2018 at 15:12, Deepshikha Khandelwal wrote: > Hi all, > > There are tests which have been constantly failing for distributed > regression framework[1]. I would like to

Re: [Gluster-devel] [Gluster-infra] bug-1432542-mpx-restart-crash.t failing

2018-07-09 Thread Nithya Balachandran
We discussed reducing the number of volumes in the maintainers' meeting. Should we still go ahead and do that? On 9 July 2018 at 15:45, Xavi Hernandez wrote: > On Mon, Jul 9, 2018 at 11:14 AM Karthik Subrahmanya > wrote: > >> Hi Deepshikha, >> >> Are you looking into this failure? I can still

Re: [Gluster-devel] Storing list of dentries of children in parent inode

2018-07-02 Thread Nithya Balachandran
On 29 June 2018 at 13:02, Amar Tumballi wrote: > > > On Fri, Jun 29, 2018 at 12:25 PM, Vijay Bellur wrote: > >> >> >> On Wed, Jun 27, 2018 at 10:15 PM Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> All, >>> >>> There is a requirement in write-behind where during readdirp we may

[Gluster-devel] 4.0 documentation

2018-02-21 Thread Nithya Balachandran
Hi, We need to start on writing up topics for 4.0. If you have worked on any features for 4.0, let's get started on writing those up. Please get in touch with me so I can track those. Regards, Nithya

Re: [Gluster-devel] Jenkins Issues this weekend and how we're solving them

2018-02-19 Thread Nithya Balachandran
On 19 February 2018 at 13:12, Atin Mukherjee wrote: > > > On Mon, Feb 19, 2018 at 8:53 AM, Nigel Babu wrote: > >> Hello, >> >> As you all most likely know, we store the tarball of the binaries and >> core if there's a core during regression. Occasionally,

[Gluster-devel] https://review.gluster.org/#/c/19543/

2018-02-12 Thread Nithya Balachandran
Hi, I have been thinking of how to speed up MAKE_HANDLE_PATH and have put up a test patch ($subject). This will first try with a buffer of 1K and only fall back to the calculation if the path doesn't fit in that space. I think it likely that 1K will be sufficient for most data sets. A better
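
A sketch of the two-pass idea described in the message, with illustrative names rather than the actual MAKE_HANDLE_PATH macro: format into a fixed 1K buffer first and allocate only when the path does not fit. The real macro would use the stack buffer in place; this standalone version copies it out, and it assumes the usual .glusterfs/xx/yy/<gfid> handle layout.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Assumes gfid_str is the usual 36-char uuid string, stored as
     * <brick>/.glusterfs/<first 2 chars>/<next 2 chars>/<gfid>. */
    char *
    build_handle_path(const char *brick, const char *gfid_str)
    {
        char stackbuf[1024];
        int n = snprintf(stackbuf, sizeof(stackbuf),
                         "%s/.glusterfs/%.2s/%.2s/%s",
                         brick, gfid_str, gfid_str + 2, gfid_str);
        if (n < 0)
            return NULL;
        if ((size_t)n < sizeof(stackbuf))
            return strdup(stackbuf);       /* common case: fits in 1K */

        char *buf = malloc((size_t)n + 1); /* rare case: exact size */
        if (!buf)
            return NULL;
        snprintf(buf, (size_t)n + 1, "%s/.glusterfs/%.2s/%.2s/%s",
                 brick, gfid_str, gfid_str + 2, gfid_str);
        return buf;
    }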

[Gluster-devel] https://review.gluster.org/#/c/18893

2018-02-04 Thread Nithya Balachandran
Hi, It looks like this has not been ported to release-3.12 branch. Can the author please do so? Thanks, Nithya

Re: [Gluster-devel] Release 3.12.6: Scheduled for the 12th of February

2018-02-02 Thread Nithya Balachandran
On 2 February 2018 at 11:16, Jiffin Tony Thottan wrote: > Hi, > > It's time to prepare the 3.12.6 release, which falls on the 10th of > each month, and hence would be 12-02-2018 this time around. > > This mail is to call out the following, > > 1) Are there any pending

Re: [Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Nithya Balachandran
On 1 February 2018 at 09:46, Milind Changire wrote: > If a *volume stop* fails at a user's production site with a reason like > *rebalance session is active* then the admin will wait for the session to > complete and then reissue a *volume stop*; > > So, in essence, the

[Gluster-devel] GD 2 xlator option changes

2018-01-14 Thread Nithya Balachandran
Hi, A few questions about this: 1. What (if anything) should be done for options like these which have "!" ? /* Switch xlator options (Distribute special case) */ { .key= "cluster.switch", .voltype= "cluster/distribute", .option =

[Gluster-devel] posix_disk_space_check and internal clients

2018-01-10 Thread Nithya Balachandran
Hi, DISK_SPACE_CHECK_AND_GOTO in posix.h allows all calls from internal clients to bypass the check. But in edge cases, internal clients could write to files and fill up the brick. Would it be better to not allow some fops like write in this case for internal clients as well? Regards, Nithya
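
To make the proposal concrete, a hypothetical version of such a check is sketched below. Names and shape are illustrative and differ from the real DISK_SPACE_CHECK_AND_GOTO macro; the point is that non-space-consuming fops always pass, while the internal-client exemption would be dropped for fops that consume space.

    #include <stdbool.h>
    #include <sys/statvfs.h>

    static bool
    disk_space_ok(const char *brick_path, double reserve_pct)
    {
        struct statvfs st;
        if (statvfs(brick_path, &st) != 0 || st.f_blocks == 0)
            return true;                   /* fail open if we can't tell */
        double free_pct = 100.0 * st.f_bavail / st.f_blocks;
        return free_pct > reserve_pct;
    }

    static bool
    allow_fop(bool internal_client, bool consumes_space,
              const char *brick_path, double reserve_pct)
    {
        if (!consumes_space)
            return true;                   /* reads, stats, ... */
        if (internal_client)
            return true;                   /* current exemption; the mail
                                            * asks whether to drop this
                                            * for writes */
        return disk_space_ok(brick_path, reserve_pct);
    }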

Re: [Gluster-devel] Revert of 56e5fdae (SSL change) - why?

2018-01-07 Thread Nithya Balachandran
On 7 January 2018 at 18:54, Jeff Darcy wrote: > There's no explanation, or reference to one, in the commit message. In the > comments, there's a claim that seems a bit exaggerated. > > This is causing almost all the regressions to fail. durability-off.t is > the most affected

[Gluster-devel] https://build.gluster.org/job/regression-test-with-multiplex/558/console

2017-12-04 Thread Nithya Balachandran
Hi, This has crashed in server_inode_new in ./tests/bugs/bug-1371806_3.t because inode_table is NULL. I have filed a BZ (1520374) and added initial coredump analysis. Can someone take a look at this? Thanks, Nithya

Re: [Gluster-devel] Tests failing on Centos 7

2017-11-27 Thread Nithya Balachandran
On 27-Nov-2017 8:44 AM, "Nigel Babu" wrote: > > Hello folks, > > I have an update on chunking. There's good news and bad. The first bit is > that we have a chunked regression job now. It splits it out into 10 chunks that > are run in parallel. This chunking is quite simple at the

Re: [Gluster-devel] Regression failure: ./tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t

2017-11-06 Thread Nithya Balachandran
[glusterd-utils.c:8063:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 30706. This is nearly 25 seconds later, and PROCESS_DOWN_TIMEOUT is set to 5. Regards, Nithya On Mon, Nov 6, 2017 at 3:06 PM, Nithya Balachandran <nbala...@redhat.com> > wrote: > >> Hi

[Gluster-devel] Gluster Summit BOF - Rebalance

2017-11-06 Thread Nithya Balachandran
Hi, We had a BOF on Rebalance at the Gluster Summit to get feedback from Gluster users. - Performance has improved over the last few releases and it works well for large files. - However, it is still not fast enough on volumes which contain a lot of directories and small files. The bottleneck

[Gluster-devel] Regression failure : /tests/basic/ec/ec-1468261.t

2017-11-06 Thread Nithya Balachandran
Can someone take a look at this? The run was aborted (https://build.gluster.org/job/centos6-regression/7232/console) Thanks, Nithya

[Gluster-devel] Regression failure: ./tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t

2017-11-06 Thread Nithya Balachandran
Hi, Can someone take a look at : https://build.gluster.org/job/centos6-regression/7231/ ? From the logs: [2017-11-06 08:03:21.200177]:++ G_LOG:./tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t: TEST: 26 1 brick_up_status_1 patchy 127.1.1.3 /d/backends/3/patchy3

Re: [Gluster-devel] Gluster CLI Feedback

2017-10-16 Thread Nithya Balachandran
Gentle reminder. Thanks to those who have already responded. Nithya On 11 October 2017 at 14:38, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi, > > As part of our initiative to improve Gluster usability, we would like > feedback on the current Gluster CLI. Gl

Re: [Gluster-devel] Locating blocker bugs for a release (and what is a blocker anyway)

2017-10-11 Thread Nithya Balachandran
On 12 October 2017 at 10:12, Raghavendra Talur wrote: > On Wed, Oct 11, 2017 at 8:35 PM, Shyam Ranganathan > wrote: > > Hi, > > > > Recently I was in some conversations that mentioned having to hunt for > the > > mail that announced the blocker bug for a

[Gluster-devel] Gluster CLI Feedback

2017-10-11 Thread Nithya Balachandran
Hi, As part of our initiative to improve Gluster usability, we would like feedback on the current Gluster CLI. Gluster 4.0 upstream development is currently in progress and it is an ideal time to consider CLI changes. Answers to the following would be appreciated: 1. How often do you use the

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-24 Thread Nithya Balachandran
It has been a while but iirc snapview client (loaded above dht/tier etc) had some issues when we ran tiering tests. Rafi might have more info on this - basically it was expecting to find the inode_ctx populated but it was not. On 24 August 2017 at 10:13, Raghavendra G

Re: [Gluster-devel] [Gluster-Maintainers] Proposed Protocol changes for 4.0: Need feedback.

2017-08-15 Thread Nithya Balachandran
On 11 August 2017 at 18:04, Amar Tumballi wrote: > Hi All, > > Below are the proposed protocol changes (ie, XDR changes on the wire) we > are thinking for Gluster 4.0. > > >- rchecksum/fsetattr: Add 'gfid' field on wire > > Basic work already done at

[Gluster-devel] Error messages in the logs:

2017-07-25 Thread Nithya Balachandran
Hi, I've been seeing the following in some of the logs with the recent master builds. Does anyone know why these could be happening? [2017-07-24 16:58:33.561607] E [client_t.c:321:gf_client_ref] (-->/build/install/lib/libgfrpc.so.0(rpcsvc_request_create+0x1af) [0x7f03e4aefc67]

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Nithya Balachandran
On 13 July 2017 at 11:46, Pranith Kumar Karampuri wrote: > > > On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee > wrote: > >> Thank you for response quickly >> >> >> I went through dht_get_du_info before I start developing this. >> >> at that time, I

Re: [Gluster-devel] On change #15468

2017-07-12 Thread Nithya Balachandran
Mohit, Can you set up a call/meeting where you can explain the current patch? That would make it easier to review. Thanks, Nithya On 13 July 2017 at 10:32, Raghavendra Gowdappa wrote: > All, > > Patch [1] is getting more complex day by day. We had to extend permission >

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-23 Thread Nithya Balachandran
On 22 June 2017 at 22:44, Pranith Kumar Karampuri wrote: > > > On Wed, Jun 21, 2017 at 9:12 PM, Shyam wrote: > >> On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote: >> >>> >>> >>> On Tue, Jun 20, 2017 at 7:37 PM, Shyam >>

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Nithya Balachandran
On 21 June 2017 at 10:26, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: > > > On Wed, Jun 21, 2017 at 10:07 AM, Nithya Balachandran <nbala...@redhat.com > > wrote: > >> >> On 20 June 2017 at 20:38, Aravinda <avish...@redhat.com> wrote: >>

Re: [Gluster-devel] geo-rep regression because of node-uuid change

2017-06-20 Thread Nithya Balachandran
On 20 June 2017 at 20:38, Aravinda wrote: > On 06/20/2017 06:02 PM, Pranith Kumar Karampuri wrote: > > Xavi, Aravinda and I had a discussion on #gluster-dev and we agreed to go > with the format Aravinda suggested for now and in future we wanted some > more changes for dht

Re: [Gluster-devel] Rebalance source code

2017-06-08 Thread Nithya Balachandran
On 8 June 2017 at 09:35, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > > - Original Message - > > From: "Tahereh Fattahi" <t28.fatt...@gmail.com> > > To: "Nithya Balachandran" <nbala...@redhat.com>, "Susant P

Re: [Gluster-devel] Release 3.11: Backports needed and regression failure status

2017-05-23 Thread Nithya Balachandran
On 23 May 2017 at 06:56, Shyam wrote: > Hi, > > Backport status: > The following patches are available in release-3.10 but not in > release-3.11, > - https://review.gluster.org/#/c/17197/ (@nithya request the backport) > Apologies. Patch available at

Re: [Gluster-devel] How rebalance volume without changing layout?

2017-05-21 Thread Nithya Balachandran
On 19 May 2017 at 22:52, Tahereh Fattahi wrote: > Hi > Is it any way to rebalance a volume without any change in layout? (for > example the layout changed before and now just need rebalance) > I test rebalance and rebalace force, they change the layout. > I could not do

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Nithya Balachandran
We have one more blocker bug (opened today): https://bugzilla.redhat.com/show_bug.cgi?id=1448307 On 5 May 2017 at 15:31, Kaushal M wrote: > On Thu, May 4, 2017 at 6:40 PM, Kaushal M wrote: > > On Thu, May 4, 2017 at 4:38 PM, Niels de Vos

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Nithya Balachandran
On 2 May 2017 at 16:59, Shyam wrote: > Talur, > > Please wait for this fix before releasing 3.10.2. > > We will take in the change to either prevent add-brick in > sharded+distrbuted volumes, or throw a warning and force the use of --force > to execute this. > > IIUC, the

Re: [Gluster-devel] About inode table: client, server and inconsistency

2017-03-25 Thread Nithya Balachandran
On 22 March 2017 at 12:39, Tahereh Fattahi <t28.fatt...@gmail.com> wrote: > a university project! > > On Sun, Mar 19, 2017 at 11:31 PM, Nithya Balachandran <nbala...@redhat.com > > wrote: > >> >> >> On 18 March 2017 at 21:42, Tahereh Fattahi <t28

Re: [Gluster-devel] About inode table: client, server and inconsistency

2017-03-19 Thread Nithya Balachandran
On 18 March 2017 at 21:42, Tahereh Fattahi wrote: > Thank you very much. > Is it possible to change something in server inode table during a fop from > client? (I want to change the dht_layout of a directory when create a file > in that directory, but I dont know how send

[Gluster-devel] https://review.gluster.org/#/c/16643/

2017-02-20 Thread Nithya Balachandran
Hi, Can this be merged ? This is holding up my 3.9 patch backports. Regards, Nithya

Re: [Gluster-devel] Logging in a multi-brick daemon

2017-02-15 Thread Nithya Balachandran
On 16 February 2017 at 07:30, Shyam wrote: > On 02/15/2017 08:51 PM, Atin Mukherjee wrote: > >> >> On Thu, 16 Feb 2017 at 04:09, Jeff Darcy > > wrote: >> >> One of the issues that has come up with multiplexing is that all of

Re: [Gluster-devel] 3.10 release and testing improvements

2017-02-01 Thread Nithya Balachandran
On 2 February 2017 at 09:26, Nithya Balachandran <nbala...@redhat.com> wrote: > > > On 1 February 2017 at 19:27, Raghavendra Talur <rta...@redhat.com> wrote: > >> Hi all, >> >> As we approach release of 3.10, we should take a look at existing &g

Re: [Gluster-devel] 3.10 release and testing improvements

2017-02-01 Thread Nithya Balachandran
On 1 February 2017 at 19:27, Raghavendra Talur wrote: > Hi all, > > As we approach release of 3.10, we should take a look at existing > tests are being skipped because of issues. These two weeks should be > right time for us to focus on tests as feature patches have been >

[Gluster-devel] Spurious regression failure? tests/basic/ec/ec-background-heals.t

2017-01-23 Thread Nithya Balachandran
Hi, Can you please take a look at https://build.gluster.org/job/centos6-regression/2859/console ? tests/basic/ec/ec-background-heals.t has failed. Thanks, Nithya

[Gluster-devel] Regression failure: https://build.gluster.org/job/centos6-regression/2757/consoleFull

2017-01-19 Thread Nithya Balachandran
Hi, Can someone take a look at this? The test that failed is: tests/bugs/replicate/bug-1297695.t Thanks, Nithya

Re: [Gluster-devel] Release 3.10: Pending reviews

2017-01-17 Thread Nithya Balachandran
On 17 January 2017 at 00:38, Shyam wrote: > Hi, > > Release 3.10 branching is slated for tomorrow 17th Jan. This means > features that slip the branching date need to be a part of the next release > (if they are ready by then), i.e 3.11. > > Of course there are going to be

Re: [Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-15 Thread Nithya Balachandran
On 15 December 2016 at 18:07, Xavier Hernandez wrote: > On 12/15/2016 12:48 PM, Raghavendra Gowdappa wrote: > >> I need to step back a little to understand the RCA correctly. >> >> If I understand the code correctly, the callstack which resulted in >> failed setattr is (in

[Gluster-devel] Release 3.10 feature proposal : Estimate time to complete rebalance

2016-12-06 Thread Nithya Balachandran
There is no way at present to determine when a rebalance operation will complete. This requires admins to keep monitoring the rebalance operation. The proposed approach will calculate the estimated time every time the rebalance status command is issued. The value will be displayed along with the
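
The arithmetic behind such an estimate is straightforward; a sketch with illustrative names (the eventual implementation also has to estimate the total file count, which is not known up front):

    #include <stdint.h>

    /* Project time remaining from the rate observed so far; returns -1
     * until there is enough data. Assumes files_total >= files_done. */
    double
    rebalance_eta_secs(uint64_t files_done, uint64_t files_total,
                       double elapsed_secs)
    {
        if (files_done == 0 || elapsed_secs <= 0.0)
            return -1.0;
        double rate = (double)files_done / elapsed_secs; /* files/sec */
        return (double)(files_total - files_done) / rate;
    }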

Re: [Gluster-devel] Dht crash in regression

2016-11-20 Thread Nithya Balachandran
Looks like a tier test. Milind, Can you take a look at this? Thanks, Nithya On 21 November 2016 at 10:26, Poornima Gurusiddaiah wrote: > Hi, > > I see that dht (rebalance?) has generated core during regression run on > 3.7 branch, please take a look. >

[Gluster-devel] Fwd: Feature: Rebalance completion time estimation

2016-11-17 Thread Nithya Balachandran
On 14 November 2016 at 05:10, Shyam wrote: > On 11/11/2016 05:46 AM, Susant Palai wrote: > >> Hello All, >>We have been receiving many requests from users to give a "Rebalance >> completion time estimation". This email is to gather ideas and feedback >> from the

Re: [Gluster-devel] Upstream smoke test failures

2016-11-15 Thread Nithya Balachandran
On 15 November 2016 at 18:55, Vijay Bellur <vbel...@redhat.com> wrote: > On Mon, Nov 14, 2016 at 10:34 PM, Nithya Balachandran > <nbala...@redhat.com> wrote: > > > > > > On 14 November 2016 at 21:38, Vijay Bellur <vbel...@redhat.com> wrote: > >&

[Gluster-devel] Upstream smoke test failures

2016-11-13 Thread Nithya Balachandran
Hi, Our smoke tests have been failing quite frequently of late. While re-triggering smoke several times in order to get a +1 works eventually, this does not really help anything IMO. I believe Jeff has already proposed this earlier but can we remove the failing dbench tests from smoke until we

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-10 Thread Nithya Balachandran
On 8 November 2016 at 20:21, Kyle Johnson wrote: > Hey there, > > We have a number of processes which daily walk our entire directory tree > and perform operations on the found files. > > Pre-gluster, this processes was able to complete within 24 hours of > starting. After

[Gluster-devel] Spurious failure in ./tests/basic/rpc-coverage.t ?

2016-09-23 Thread Nithya Balachandran
https://build.gluster.org/job/centos6-regression/930/console Can someone please take a look at this? Thanks, Nithya

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-15 Thread Nithya Balachandran
On 15 September 2016 at 17:21, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > > - Original Message - > > From: "Xavier Hernandez" <xhernan...@datalab.es> > > To: "Raghavendra G" <raghaven...@gluster.com>, "Nith

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-15 Thread Nithya Balachandran
On 8 September 2016 at 12:02, Mohit Agrawal wrote: > Hi All, > >I have one another solution to heal user xattr but before implement it > i would like to discuss with you. > >Can i call function (dht_dir_xattr_heal internally it is calling > syncop_setxattr) to heal

[Gluster-devel] Rebalance status

2016-09-15 Thread Nithya Balachandran
Hi, While the code defines the following: GF_DEFRAG_STATUS_LAYOUT_FIX_STARTED, GF_DEFRAG_STATUS_LAYOUT_FIX_STOPPED, GF_DEFRAG_STATUS_LAYOUT_FIX_COMPLETE, GF_DEFRAG_STATUS_LAYOUT_FIX_FAILED we don't seem to be using them anywhere. They sound like statuses

[Gluster-devel] Gluster and FreeBSD

2016-09-09 Thread Nithya Balachandran
Hi, I recently debugged a problem [1] where linkfiles were not created properly on a gluster volume created using bricks running UFS. Whenever a linkfile was created, the sticky bit was not set on it, causing the same file to be listed twice. From https://www.freebsd.org/cgi/man.cgi?query=chmod&sektion=2
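
A tiny reproducer for the behaviour described above, using a hypothetical brick-local path: DHT linkfiles are zero-byte files whose mode carries only the sticky bit (01000), so creating one and re-checking the bit shows whether the backend filesystem honours it. On FreeBSD's UFS, chmod(2) does not let an unprivileged process set the sticky bit on a regular file, which matches the failure seen here.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* hypothetical brick-local path */
        const char *path = "/bricks/b1/linkfile-test";

        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0);
        if (fd < 0) { perror("open"); return 1; }
        close(fd);

        if (chmod(path, S_ISVTX) != 0)   /* 01000, as DHT sets on linkfiles */
            perror("chmod");             /* fails for non-root on UFS */

        struct stat st;
        if (stat(path, &st) == 0)
            printf("sticky bit %s\n",
                   (st.st_mode & S_ISVTX) ? "set" : "NOT set");
        unlink(path);
        return 0;
    }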

Re: [Gluster-devel] regression failed : snapshot/bug-1316437.t

2016-07-25 Thread Nithya Balachandran
More failures: https://build.gluster.org/job/rackspace-regression-2GB-triggered/22452/console I see these messages in the snapd.log: [2016-07-22 05:31:52.482282] I [rpcsvc.c:2199:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64 [2016-07-22

Re: [Gluster-devel] 3.7 regressions on NetBSD

2016-07-22 Thread Nithya Balachandran
On Sat, Jul 23, 2016 at 9:45 AM, Nithya Balachandran <nbala...@redhat.com> wrote: > > > On Fri, Jul 22, 2016 at 9:07 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Fri, Jul 22, 2016 at 8:12 PM, Pranith Kumar Karampuri < &g

Re: [Gluster-devel] 3.7 regressions on NetBSD

2016-07-22 Thread Nithya Balachandran
On Fri, Jul 22, 2016 at 7:42 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, Jul 22, 2016 at 7:39 PM, Nithya Balachandran <nbala...@redhat.com> > wrote: > >> >> >> On Fri, Jul 22, 2016 at 7:31 PM, Jeff Darcy <jda...@redhat

Re: [Gluster-devel] 3.7 regressions on NetBSD

2016-07-22 Thread Nithya Balachandran
On Fri, Jul 22, 2016 at 7:31 PM, Jeff Darcy wrote: > > I attempted to get us more space on NetBSD by creating a new partition > called > > /data and putting /build as a symlink to /data/build. This has caused > > problems > > with tests/basic/quota.t. It's marked as bad for

[Gluster-devel] Regression failure in ./tests/bugs/snapshot/bug-1316437.t

2016-07-21 Thread Nithya Balachandran
Hi, Can someone please take a look at this? Failed run available at: https://build.gluster.org/job/rackspace-regression-2GB-triggered/22406/console Thanks, Nithya

Re: [Gluster-devel] 'mv' of ./tests/bugs/posix/bug-1113960.t causes 100% CPU

2016-05-17 Thread Nithya Balachandran
On Tue, May 17, 2016 at 2:52 PM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > > - Original Message - > > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > > To: "Nithya Balachandran" <nbala...@redhat.com> > > C

Re: [Gluster-devel] 'mv' of ./tests/bugs/posix/bug-1113960.t causes 100% CPU

2016-05-17 Thread Nithya Balachandran
Hi, I have looked into this on another system earlier and this is what I have so far: 1. The test involves moving and renaming directories and files within those dirs. 2. A rename dir operation failed on one subvol. So we have 3 subvols where the directory has the new name and one where it has

[Gluster-devel] Upstream regression failure: http://build.gluster.org/job/rackspace-regression-2GB-triggered/19497/

2016-04-05 Thread Nithya Balachandran
Test Summary Report --- ./tests/bugs/replicate/bug-1297695.t (Wstat: 0 Tests: 22 Failed: 1) Failed test: 22 Files=1, Tests=22, 18 wallclock secs ( 0.03 usr 0.00 sys + 2.47 cusr 1.61 csys = 4.11 CPU) Result: FAIL End of test ./tests/bugs/replicate/bug-1297695.t Can someone

Re: [Gluster-devel] Removing Fix layout during attach

2016-02-22 Thread Nithya Balachandran
> Well as add brick to a normal volume do we have this constraint ? > The add brick is a different scenario - regular DHT rebalance requires that all subvols be up. The same need not necessarily be the case for tiering. > - Original Message - > From: "Nithya Bal

Re: [Gluster-devel] Removing Fix layout during attach

2016-02-22 Thread Nithya Balachandran
> +gluster-devel > > - Original Message - > From: "Joseph Fernandes" <josfe...@redhat.com> > To: "Mohammed Rafi K C" <rkavu...@redhat.com> > Cc: "Nithya Balachandran" <nbala...@redhat.com>, rhgs-tier...@redhat.com > S

[Gluster-devel] Retrigger request: https://build.gluster.org/job/rackspace-regression-2GB-triggered/18066/

2016-02-07 Thread Nithya Balachandran
Hi, Can someone please retrigger this regression? It has been aborted in the middle of the run. Thanks, Nithya

Re: [Gluster-devel] 3.7 pending patches

2016-01-28 Thread Nithya Balachandran
ay Bellur" <vbel...@redhat.com>, "Raghavendra Gowdappa" > <rgowd...@redhat.com>, "Nithya Balachandran" > <nbala...@redhat.com> > Sent: Thursday, 28 January, 2016 8:29:16 PM > Subject: Re: 3.7 pending patches > > > > On 01/28/20

Re: [Gluster-devel] 3.7 pending patches

2016-01-28 Thread Nithya Balachandran
Sorry - forgot to provide the link: http://review.gluster.org/#/c/13262/ Regards, Nithya - Original Message - > From: "Nithya Balachandran" <nbala...@redhat.com> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com> > Cc: "Gluster Devel"

Re: [Gluster-devel] [Gluster-infra] NetBSD tests not running to completion.

2016-01-07 Thread Nithya Balachandran
I agree. Regards, Nithya - Original Message - > From: "Atin Mukherjee" > To: "Joseph Fernandes" , "Avra Sengupta" > > Cc: "Gluster Devel" , "gluster-infra" > > Sent:

Re: [Gluster-devel] intermittent test failure: tests/bugs/tier/bug-1279376-rename-demoted-file.t

2015-12-09 Thread Nithya Balachandran
> > > - Original Message - > > From: "Michael Adam" > > To: gluster-devel@gluster.org > > Sent: Wednesday, December 9, 2015 1:46:32 PM > > Subject: [Gluster-devel] intermittent test failure: > > tests/bugs/tier/bug-1279376-rename-demoted-file.t > > > > Hi, > > > >

[Gluster-devel] Upstream regression crash : https://build.gluster.org/job/rackspace-regression-2GB-triggered/16191/consoleFull

2015-11-25 Thread Nithya Balachandran
Hi, The test tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t has failed with a core. Can you please take a look? The NFS log says: gluster/02c803ff8630bd12cc8dc9dc043a6103.socket) [2015-11-25 19:06:41.561937] I [MSGID: 101190]

  1   2   >