Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-05 Thread Ravishankar N
memory usage changed by about 4GB! [Sent from NetEase Mail Master] On 09/02/2016 09:45, Ravishankar N <ravishan...@redhat.com> wrote: On 09/02/2016 05:42 AM, Keiviw wrote: Even if I set the attribute-timeout and entry-timeout to 3600s (1h), in the nodeB, it

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N
th various attr/entry timeout values. Make sure your volume has a lot of small files. -Ravi On 2016-09-01 16:37:00, "Ravishankar N" wrote: On 09/01/2016 01:04 PM, Keiviw wrote: Hi, I have found that the GlusterFS client (mounted by FUSE) didn't cache metadata li

[Gluster-devel] Fwd: [Gluster-users] bug-upcall-stat.t always fails on master

2016-09-01 Thread Ravishankar N
Sorry sent it to users instead of devel. I'll show myself out. Forwarded Message Subject:[Gluster-users] bug-upcall-stat.t always fails on master Date: Thu, 1 Sep 2016 22:41:53 +0530 From: Ravishankar N To: gluster-us...@gluster.org List Test Su

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N
On 09/01/2016 01:04 PM, Keiviw wrote: Hi, I have found that the GlusterFS client (mounted by FUSE) didn't cache metadata like dentries and inodes. I have installed GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 were on nodeA. Then on nodeB, I mounted the volume to /mnt/glusterfs b
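The mount options discussed in this thread can be set at mount time; a minimal sketch, where the server name (nodeA) and volume name (testvol) are placeholders:

```shell
# Raise the FUSE metadata-cache timeouts to 1 hour so the kernel can cache
# dentries and inode attributes instead of asking glusterfs on every lookup.
mount -t glusterfs \
    -o attribute-timeout=3600,entry-timeout=3600 \
    nodeA:/testvol /mnt/glusterfs

# The same options passed directly to the glusterfs client binary:
glusterfs --volfile-server=nodeA --volfile-id=testvol \
    --attribute-timeout=3600 --entry-timeout=3600 /mnt/glusterfs
```

As the thread notes, long timeouts trade metadata coherency across clients for fewer FOPs, so they suit read-mostly workloads.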

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-28 Thread Ravishankar N
On 08/26/2016 09:39 PM, Pranith Kumar Karampuri wrote: On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: hi, Now that we are almost near the feature freeze date (31st of Aug), want to get a sense if any of the status of the feature

Re: [Gluster-devel] [Gluster-users] CFP for Gluster Developer Summit

2016-08-23 Thread Ravishankar N
Hello, Here is a proposal I'd like to make. Title: Throttling in gluster (https://github.com/gluster/glusterfs-specs/blob/master/accepted/throttling.md) Theme: Performance and scalability. The talk/ discussion will be focused on server side throttling of FOPS, using a throttling translator.

Re: [Gluster-devel] Nameless lookup in meta namespace

2016-08-11 Thread Ravishankar N
On 08/12/2016 11:29 AM, Mohammed Rafi K C wrote: Hi, As you may probably know meta xlators provide a /proc kind of virtual name space that can be used to get meta data information for mount process. We are trying to enhance the meta xlator to support more features and to support other protocols.

Re: [Gluster-devel] Suggestions for improving the block/gluster driver in QEMU

2016-07-28 Thread Ravishankar N
On 07/28/2016 04:43 PM, Niels de Vos wrote: posix_discard() in gluster seems to be using fallocate() with FALLOC_FL_PUNCH_HOLE flag. And posix_zerofill() can be made smarter to use FALLOC_FL_ZERO_RANGE and fallback to writing zeroes if ZERO_RANGE is not supported. Oh, nice find! I was expecti
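The two fallocate(2) modes Niels mentions can be exercised from the shell with util-linux's fallocate tool; a rough sketch on a scratch file, assuming a filesystem that supports hole punching (e.g. ext4 or XFS):

```shell
# Create a 4 MiB file, then punch a hole in the first MiB.
# PUNCH_HOLE deallocates blocks but keeps the file size unchanged,
# which is what posix_discard() relies on.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=4 status=none
before=$(du -k "$f" | cut -f1)
fallocate --punch-hole --offset 0 --length 1M "$f" 2>/dev/null || true
after=$(du -k "$f" | cut -f1)
size=$(stat -c %s "$f")
echo "allocated KiB: before=$before after=$after size=$size"

# ZERO_RANGE zeroes a byte range in place; the suggested fallback for
# filesystems without support is simply writing zeroes.
fallocate --zero-range --offset 1M --length 1M "$f" 2>/dev/null ||
    dd if=/dev/zero of="$f" bs=1M seek=1 count=1 conv=notrunc status=none
rm -f "$f"
```

On a supporting filesystem the allocated size drops after the punch while the apparent file size stays at 4 MiB.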

Re: [Gluster-devel] Suggestions for improving the block/gluster driver in QEMU

2016-07-28 Thread Ravishankar N
On 07/28/2016 03:32 PM, Niels de Vos wrote: There are some features in QEMU that we could implement with the existing libgfapi functions. Kevin asked me about this a while back, and I have finally (sorry for the delay Kevin!) taken the time to look into it. There are some optional operations tha

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-19 Thread Ravishankar N
On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures: http://review.gluster.org/14960 Thanks Soumya for root-causing this. Thanks and Regards, Kotresh H R arbiter-mount.t has failed despite having this check. :-( -Ravi

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-19 Thread Ravishankar N
On 07/20/2016 10:43 AM, Poornima Gurusiddaiah wrote: *./tests/basic/afr/arbiter-mount.t* ; Failed *4* times Regression Links: https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull Regression Links: https://build.gluster.org/job/rackspace-regression-2GB-trigg

Re: [Gluster-devel] [Gluster-users] About Gluster cluster availability when one of out of two nodes is down

2016-06-29 Thread Ravishankar N
On 06/30/2016 11:40 AM, Atin Mukherjee wrote: Currently on a two node set up, if node B goes down and node A is rebooted brick process(es) on node A doesn't come up to avoid split brains. This has always been the case. A patch I had sent quite some time back (http://review.gluster.org/#/c/803

Re: [Gluster-devel] netbsd smoke tests fail when code patches are backported to release-3.6

2016-05-20 Thread Ravishankar N
On 05/20/2016 08:13 PM, Angelos SAKELLAROPOULOS wrote: Hi, May I ask why following review requests are not submitted to release-3.6 ? It seems that they fail in netbsd, freebsd smoke tests which are not related to code changes. You'd have to re-trigger tests if they are spurious failures. Type

Re: [Gluster-devel] tarissue.t spurious failure

2016-05-19 Thread Ravishankar N
/job/rackspace-regression-2GB-triggered/20965/consoleFull - Original Message - From: "Ravishankar N" To: "Krutika Dhananjay" Cc: "Gluster Devel" Sent: Thursday, May 19, 2016 6:21:42 PM Subject: Re: [Gluster-devel] tarissue.t spurious failure On 05/19/2016

[Gluster-devel] Unsplit-brain policies patch

2016-05-19 Thread Ravishankar N
Hi Richard, You might already be getting notifications about the review comments on http://review.gluster.org/#/c/14026/. Wanted your feedback on how you handled cases for the 'majority' policy of resolution when there is no majority for selecting a brick as a source in your original [1]

Re: [Gluster-devel] self heal start failure on 3.8rc1

2016-05-19 Thread Ravishankar N
On 05/20/2016 09:09 AM, Emmanuel Dreyfus wrote: Hello After updating from 3.7.11 to 3.8rc1, self heal daemon will not start anymore. Here is the log. The "op-version >= 30707" error reminds me something we already saw in the past. Yes, since 3.8 was based off master, it has the same issue. ht

Re: [Gluster-devel] tarissue.t spurious failure

2016-05-19 Thread Ravishankar N
On 05/19/2016 04:47 PM, Ravishankar N wrote: On 05/19/2016 04:44 PM, Krutika Dhananjay wrote: Also, I must add that I ran it in a loop on my laptop for about 4 hours and it ran without any failure. There seems to be a genuine problem. The test was failing on my machine 1/4 times on master

Re: [Gluster-devel] tarissue.t spurious failure

2016-05-19 Thread Ravishankar N
On 05/19/2016 04:44 PM, Krutika Dhananjay wrote: Also, I must add that I ran it in a loop on my laptop for about 4 hours and it ran without any failure. There seems to be a genuine problem. The test was failing on my machine 1/4 times on master. -Krutika On Thu, May 19, 2016 at 4:42 PM, K

Re: [Gluster-devel] question on gluster volume heal VOLUME info command

2016-05-10 Thread Ravishankar N
On 05/11/2016 01:19 AM, Angelos SAKELLAROPOULOS wrote: Apart from "split-brain" what other type of problem could the file have in order to be listed there (that potentially the gluster could 'heal') ? The files may not be in split-brain but actually need to be healed (data/ metadata/ entry he

Re: [Gluster-devel] question on gluster volume heal VOLUME info command

2016-05-10 Thread Ravishankar N
On 05/10/2016 04:37 PM, Angelos SAKELLAROPOULOS wrote: Hi, I use GlusterFS (branch v3.6.9). I would like to ask you: what does the "gluster volume heal VOLUME" command show? This launches index heal. I use GlusterFS as shared storage and check the output of "gluster volume heal VOLUME info" and
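For reference, the three heal-related commands contrasted in this thread (the volume name is a placeholder; these need a running replicate volume):

```shell
gluster volume heal testvol                    # trigger index heal
gluster volume heal testvol info               # entries pending heal (not necessarily split-brain)
gluster volume heal testvol info split-brain   # only entries actually in split-brain
```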

Re: [Gluster-devel] conflicting keys for option eager-lock

2016-04-08 Thread Ravishankar N
On 04/09/2016 09:52 AM, Vijay Bellur wrote: based on workflow decisions in oVirt, we might even have the volume being marked as unavailable for virtual machine image storage Ah, did not know this. Blocker it is indeed, then. -Ravi

Re: [Gluster-devel] conflicting keys for option eager-lock

2016-04-08 Thread Ravishankar N
On 04/09/2016 07:55 AM, Vijay Bellur wrote: Hey Pranith, Ashish - We have broken support for group virt after the following commit in release-3.7: Just nit-picking, eager-lock is ON by default for replicate volumes, so it is not a deal breaker unless some one explicitly disables the option

Re: [Gluster-devel] Another regression in release-3.7 and master

2016-04-07 Thread Ravishankar N
On 04/07/2016 05:11 PM, Kaushal M wrote: As earlier, please don't merge any more changes on the release-3.7 branch till this is fixed and 3.7.11 is released. http://review.gluster.org/#/c/13925/ (and its corresponding patch in master) has to be merged for 3.7.11. It fixes a performance issue in

[Gluster-devel] tests/performance/open-behind.t fails on NetBSD

2016-04-03 Thread Ravishankar N
Test Summary Report --- ./tests/performance/open-behind.t (Wstat: 0 Tests: 18 Failed: 4) Failed tests: 15-18 https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15553/consoleFull https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15564/conso

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-23 Thread Ravishankar N
On 03/23/2016 09:16 PM, Soumya Koduri wrote: If it occurs only when the file/dir is not actually present at the back-end, shouldn't we fix the server to send ENOENT then? I never fully understood it; here is the answer: http://review.gluster.org/#/c/6318/

Re: [Gluster-devel] [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Ravishankar N
b.com/e8265ca07f7b19f30bb3 On Thursday, 17 March 2016 09:58:14 EET Ravishankar N wrote: On 03/16/2016 10:57 PM, Oleksandr Natalenko wrote: OK, I've repeated the test with the following hierarchy: * 10 top-level folders with 10 second-level folders each; * 10 000 files in each second-level folder.

Re: [Gluster-devel] [Gluster-users] Arbiter brick size estimation

2016-03-18 Thread Ravishankar N
ts+results as a reference for others. Regards, Ravi Test script is here: [1] Regards, Oleksandr. [1] http://termbin.com/qlvz On Tuesday, 8 March 2016 19:13:05 EET Ravishankar N wrote: On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote: In order to estimate GlusterFS arbiter brick

Re: [Gluster-devel] Query on healing process

2016-03-14 Thread Ravishankar N
issed (and therefore a pending heal), irrespective of (offset, length). Ravi Regards, Abhishek On Fri, Mar 4, 2016 at 7:00 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com> wrote: On Fri, Mar 4, 2016 at 6:36 PM, Ravishankar N <ravishan...@redhat.com> wrot

Re: [Gluster-devel] Status update on SEEK_DATA/HOLE for GlusterFS 3.8

2016-03-11 Thread Ravishankar N
On 03/11/2016 05:07 PM, Niels de Vos wrote: Hi all, I thought I would give a short status update on the tasks related to the new SEEK procedure/FOP that has been added for GlusterFS 3.8. We had several goals, and (most of) the basics have been completed: Great! Thank *you* Niels for doing a maj

Re: [Gluster-devel] Regression: ./tests/basic/afr/sparse-file-self-heal.t fails

2016-03-10 Thread Ravishankar N
On 03/09/2016 11:28 AM, Ravishankar N wrote: On 03/09/2016 11:20 AM, Poornima Gurusiddaiah wrote: Hi, I see the below test failing for an unrelated patch: ./tests/basic/afr/sparse-file-self-heal.t. Failed test: 60. Regression: https://build.gluster.org/job/rackspace-netbsd7-regression

Re: [Gluster-devel] Regression: ./tests/basic/afr/sparse-file-self-heal.t fails

2016-03-08 Thread Ravishankar N
On 03/09/2016 11:20 AM, Poornima Gurusiddaiah wrote: Hi, I see the below test failing for an unrelated patch: ./tests/basic/afr/sparse-file-self-heal.t. Failed test: 60. Regression: https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15044/console Can you please take a look in

Re: [Gluster-devel] [Gluster-users] Arbiter brick size estimation

2016-03-08 Thread Ravishankar N
On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote: In order to estimate GlusterFS arbiter brick size, I've deployed test setup with replica 3 arbiter 1 volume within one node. Each brick is located on separate HDD (XFS with inode size == 512). Using GlusterFS v3.7.6 + memleak patches. Volume opti

[Gluster-devel] Jenkins regression for release-3.7 messed up?

2016-03-05 Thread Ravishankar N
'brick_up_status' is used by the following .ts in release-3.7 [root@ravi1 glusterfs]# git grep -w brick_up_status tests/bugs/bitrot/bug-1288490.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/brick0 tests/bugs/bitrot/bug-1288490.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick

Re: [Gluster-devel] Query on healing process

2016-03-04 Thread Ravishankar N
On 03/04/2016 06:23 PM, ABHISHEK PALIWAL wrote: Ok, just to confirm, glusterd and other brick processes are running after this node rebooted? When you run the above command, you need to check /var/log/glusterfs/glfsheal-volname.log for errors. Setting client-log-level to D

Re: [Gluster-devel] Default quorum for 2 way replication

2016-03-04 Thread Ravishankar N
On 03/04/2016 05:26 PM, Pranith Kumar Karampuri wrote: hi, So far default quorum for 2-way replication is 'none' (i.e. files/directories may go into split-brain) and for 3-way replication and arbiter based replication it is 'auto' (files/directories won't go into split-brain). There are r

Re: [Gluster-devel] Query on healing process

2016-03-04 Thread Ravishankar N
ense. Regards, Abhishek On Thu, Mar 3, 2016 at 4:54 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com> wrote: On Thu, Mar 3, 2016 at 4:10 PM, Ravishankar N <ravishan...@redhat.com> wrote: Hi, On 03/03/2016 11:14 AM, ABHISHEK PALIWAL wrote:

Re: [Gluster-devel] Query on healing process

2016-03-03 Thread Ravishankar N
r the board which is rebooted. I am waiting for your reply; please help me out on this issue. Thanks in advance. Regards, Abhishek On Fri, Feb 26, 2016 at 1:21 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com> wrote: On Fri, Feb 26, 2016 at 10:28 AM, Ravishankar N <ravishan

Re: [Gluster-devel] Throttling xlator on the bricks

2016-02-26 Thread Ravishankar N
*idx_inode; /* inode ref for xattrop dir */ + call_frame_t *frame; unsigned int entries_healed; unsigned int entries_processed; unsigned int already_healed; Richard F

Re: [Gluster-devel] Query on healing process

2016-02-25 Thread Ravishankar N
same file on two bricks are different, but the volume heal info shows zero entries. Solution: But when we tried to put delay > 5 min before the healing everything is working fine. Regards, Abhishek On Fri, Feb 26, 2016 at 6:35 AM, Ravishankar N <ravishan...@redhat.com>

Re: [Gluster-devel] Query on healing process

2016-02-25 Thread Ravishankar N
On 02/25/2016 06:01 PM, ABHISHEK PALIWAL wrote: Hi, Here I have one query regarding the time taken by the healing process. In the current two-node setup, when we rebooted one node, the self-healing process starts within less than a 5 min interval on the board, resulting in the corruption of some f

Re: [Gluster-devel] [FAILED] NetBSD-regression for ./tests/basic/afr/self-heald.t

2016-02-08 Thread Ravishankar N
[Removing Milind, adding Pranith] On 02/08/2016 04:09 PM, Emmanuel Dreyfus wrote: On Mon, Feb 08, 2016 at 04:05:44PM +0530, Ravishankar N wrote: The patch to add it to bad tests has already been merged, so I guess this .t's failure won't pop up again. IMO that was a bit too quic

Re: [Gluster-devel] [FAILED] NetBSD-regression for ./tests/basic/afr/self-heald.t

2016-02-08 Thread Ravishankar N
On 02/08/2016 04:00 PM, Emmanuel Dreyfus wrote: On Mon, Feb 08, 2016 at 10:26:22AM +0000, Emmanuel Dreyfus wrote: Indeed, same problem. But unfortunately it is not very reproducible since we need to make a full week of runs to see it again. I am tempted to just remove the assertion. NB: this d

Re: [Gluster-devel] [FAILED] NetBSD-regression for ./tests/basic/afr/self-heald.t

2016-02-08 Thread Ravishankar N
On 02/08/2016 03:37 PM, Emmanuel Dreyfus wrote: On Mon, Feb 08, 2016 at 03:26:54PM +0530, Milind Changire wrote: https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14089/consoleFull [08:44:20] ./tests/basic/afr/self-heald.t .. not ok 37 Got "0" instead of "1" not ok 52 Got "0

Re: [Gluster-devel] Throttling xlator on the bricks

2016-02-07 Thread Ravishankar N
t wanted to run some tests and see if this is all we need at the moment to regulate shd traffic, especially with Richard's multi-threaded heal patch http://review.gluster.org/#/c/13329/ being revived and made ready for 3.8. -Ravi On Jan 27, 2016, at 9:48 PM, Ravishankar N wrote: On 01

Re: [Gluster-devel] patch #10954

2016-01-28 Thread Ravishankar N
On 01/28/2016 12:50 PM, Venky Shankar wrote: Yes, that should be good. Better to have just one version of the routine. Also, I think Ravi found a bug in brick_up_status() [or the _1 version?]. http://review.gluster.org/12913 fixed it upstream already. It wasn't sent to 3.7. I think the patch ht

Re: [Gluster-devel] Throttling xlator on the bricks

2016-01-27 Thread Ravishankar N
On 01/26/2016 08:41 AM, Richard Wareing wrote: In any event, it might be worth having Shreyas detail his throttling feature (that can throttle any directory hierarchy no less) to illustrate how a simpler design can achieve similar results to these more complicated (and it follows, bug prone)

Re: [Gluster-devel] Throttling xlator on the bricks

2016-01-25 Thread Ravishankar N
e a global option. On Jan 25, 2016, at 12:29 AM, Venky Shankar wrote: On Mon, Jan 25, 2016 at 01:08:38PM +0530, Ravishankar N wrote: On 01/25/2016 12:56 PM, Venky Shankar wrote: Also, it would be beneficial to have the core TBF implementation as part of libglusterfs so as to be consuma

Re: [Gluster-devel] Throttling xlator on the bricks

2016-01-24 Thread Ravishankar N
On 01/25/2016 12:56 PM, Venky Shankar wrote: Also, it would be beneficial to have the core TBF implementation as part of libglusterfs so as to be consumable by the server side xlator component to throttle dispatched FOPs and for daemons to throttle anything that's outside "brick" boundary (such a

[Gluster-devel] Throttling xlator on the bricks

2016-01-24 Thread Ravishankar N
Hi, We are planning to introduce a throttling xlator on the server (brick) process to regulate FOPS. The main motivation is to solve complaints about AFR selfheal taking too much of CPU resources. (due to too many fops for entry self-heal, rchecksums for data self-heal etc.) The throttling is
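The token-bucket idea behind such throttling can be sketched with a tiny simulation; the numbers are illustrative only, and a real xlator would queue FOPs rather than reject them:

```shell
#!/bin/sh
# Token-bucket filter: 'rate' tokens are added per tick up to 'capacity';
# each incoming FOP consumes one token or is rejected.
rate=2 capacity=5 tokens=5 allowed=0 rejected=0
for tick in 1 2 3 4 5; do
    for fop in 1 2 3 4; do              # 4 FOPs arrive every tick
        if [ "$tokens" -gt 0 ]; then
            tokens=$((tokens - 1)); allowed=$((allowed + 1))
        else
            rejected=$((rejected + 1))  # real throttling would queue, not reject
        fi
    done
    tokens=$((tokens + rate))           # refill once per tick
    [ "$tokens" -gt "$capacity" ] && tokens=$capacity
done
echo "allowed=$allowed rejected=$rejected"
```

With 20 FOPs arriving and only an initial burst of 5 plus 2 tokens per tick, the bucket caps sustained throughput at the refill rate, which is the effect the proposed xlator wants for self-heal traffic.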

Re: [Gluster-devel] Jenkins accounts for all devs.

2016-01-22 Thread Ravishankar N
N wrote: On 01/14/2016 12:16 PM, Kaushal M wrote: On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur wrote: On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N <ravishan...@redhat.com> wrote: On 01/08/2016 12:03 PM, Raghavendra Talur wrote: P.S: Stop using the "universal"

Re: [Gluster-devel] Reverse brick order in tier volume- Why?

2016-01-22 Thread Ravishankar N
On 01/19/2016 06:44 PM, Ravishankar N wrote: 1) Is there is a compelling reason as to why the bricks of hot-tier are in the reverse order ? 2) If there isn't one, should we spend time to fix it so that the bricks appear in the order in which they were given at the time of volume crea

Re: [Gluster-devel] Jenkins accounts for all devs.

2016-01-21 Thread Ravishankar N
On 01/14/2016 12:16 PM, Kaushal M wrote: On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur wrote: On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N wrote: On 01/08/2016 12:03 PM, Raghavendra Talur wrote: P.S: Stop using the "universal" jenkins account to trigger jenkins build

Re: [Gluster-devel] Few details needed about *any* recent or upcoming feature

2016-01-20 Thread Ravishankar N
On 01/20/2016 04:11 PM, Niels de Vos wrote: Hi all, on Saturday the 30th of January I am scheduled to give a presentation titled "Gluster roadmap, recent improvements and upcoming features": https://fosdem.org/2016/schedule/event/gluster_roadmap/ I would like to ask from all feature owners/

[Gluster-devel] Reverse brick order in tier volume- Why?

2016-01-19 Thread Ravishankar N
Hello, When you perform a tier-attach, why are the bricks attached in the reverse order? For example: #gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force #gluster volume tier testvol attach replica 3 127.0.0.2:/home/ravi/bricks/bric

Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-19 Thread Ravishankar N
On 01/19/2016 02:00 PM, Pranith Kumar Karampuri wrote: On 01/19/2016 01:57 PM, Emmanuel Dreyfus wrote: On Tue, Jan 19, 2016 at 09:38:19AM +0530, Ravishankar N wrote: ./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run: https://build.gluster.org/job/rackspace-regression

Re: [Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Ravishankar N
. yes, or " EXPECT_WITHIN $SOME_NEW_TIMEOUT 1 count_changelog_files $B0/${V0}1" Please correct my email id, you are addressing another Saravana :) Sorry, noted. :) Thanks, Saravana On 01/19/2016 09:42 AM, Venky Shankar wrote: Ravishankar N wrote: Hi Saravna, ./tests/bugs

[Gluster-devel] ENOSPC on slave21.cloud.gluster.org ?

2016-01-18 Thread Ravishankar N
https://build.gluster.org/job/glusterfs-devrpms-el7/6734/console -- Error Summary - Disk Requirements: At least 104MB more space needed on the / filesystem. + exit 1 Build step 'Execute shell' marked build as failure Archivin

[Gluster-devel] ./tests/bugs/changelog/bug-1208470.t failed NetBSD

2016-01-18 Thread Ravishankar N
Hi Saravana, ./tests/bugs/changelog/bug-1208470.t seems to have failed a NetBSD run: https://build.gluster.org/job/rackspace-regression-2GB-triggered/17651/consoleFull Not sure if it is spurious as it passed in the subsequent run. Please have a look. Thanks, Ravi

[Gluster-devel] Jenkins accounts for all devs. (Was Re: Gerrit review, submit type and Jenkins testing)

2016-01-13 Thread Ravishankar N
On 01/08/2016 12:03 PM, Raghavendra Talur wrote: P.S: Stop using the "universal" jenkins account to trigger jenkins build if you are not a maintainer. If you are a maintainer and don't have your own jenkins account then get one soon! I would request for a jenkins account for non-maintainers

Re: [Gluster-devel] NetBSD tests not running to completion.

2016-01-08 Thread Ravishankar N
On 01/08/2016 03:57 PM, Emmanuel Dreyfus wrote: On Fri, Jan 08, 2016 at 05:11:22AM -0500, Jeff Darcy wrote: [08:45:57] ./tests/basic/afr/arbiter-statfs.t .. [08:43:03] ./tests/basic/afr/arbiter-statfs.t .. [08:40:06] ./tests/basic/afr/arbiter-statfs.t .. [08:08:51] ./tests/basic/afr/arbiter-stat

Re: [Gluster-devel] NetBSD tests not running to completion.

2016-01-07 Thread Ravishankar N
On 01/08/2016 09:57 AM, Emmanuel Dreyfus wrote: I am a bit disturbed by the fact that people raise the "NetBSD regression ruins my life" issue without doing the work of listing the actual issues encountered. I already did earlier- the lack of infrastructure to even find out what caused the issue

Re: [Gluster-devel] NetBSD tests not running to completion.

2016-01-07 Thread Ravishankar N
On 01/07/2016 03:52 PM, Raghavendra Gowdappa wrote: Yes, the test that failed is "dd if=/dev/zero of=$N0/test-big-write count=500 bs=1024k". I don't know why. Did the test fail (with an error)? or was it hung? It failed with EIO. mount_nfs: can't access /patchy: Permission denied mount_nfs:

[Gluster-devel] NetBSD tests not running to completion.

2016-01-06 Thread Ravishankar N
I re-triggered NetBSD regressions for http://review.gluster.org/#/c/13041/3 but they are being run in silent mode and are not completing. Can someone from the infra team take a look? The last 22 tests in https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/ have failed. Highly

Re: [Gluster-devel] strace like utility for gluster

2016-01-03 Thread Ravishankar N
On 01/04/2016 11:16 AM, Pranith Kumar Karampuri wrote: Nice, I didn't know about this. What I am looking for with this tool is even more granularity. i.e. per xlator information. It shouldn't be so difficult to find information like time spent in each xlator. What fop from fuse lead to what oth

[Gluster-devel] ./tests/basic/tier/tier-snapshot.t fails Linux regressions on 3.7

2015-12-31 Thread Ravishankar N
Hi Dan, Joseph, I have added it to bad tests list in 3.7 as well @ http://review.gluster.org/#/c/13126/ since it is failing rather frequently in the regression runs. Some failures: https://build.gluster.org/job/rackspace-regression-2GB-triggered/17231/consoleFull https://build.gluster.org/job

Re: [Gluster-devel] 3.7.7 release

2015-12-24 Thread Ravishankar N
On 12/24/2015 04:01 PM, Pranith Kumar Karampuri wrote: hi, I am going to make 3.7.7 release early next week. Please make sure your patches are merged. If you have any patches that must go to 3.7.7. let me know. I will wait for them to be merged. I have added mine to the 3.7.7 tracker bug

[Gluster-devel] Lot of Netbsd regressions 'Waiting for the next available executor'

2015-12-23 Thread Ravishankar N
$subject. Since yesterday. The build queue is growing. Something's wrong. " If you see a little black clock icon in the build queue as shown below, it is an indication that your job is sitting in the queue unnecessarily." is what it says. ___ Glust

Re: [Gluster-devel] I've taken nbslave75.cloud.gluster.org offline

2015-12-18 Thread Ravishankar N
I have brought it back online. On 12/16/2015 12:31 PM, Ravishankar N wrote: $subject. tests/basic/afr/self-heal.t is constantly failing on this slave for http://review.gluster.org/#/c/12894/, I need to debug why. -Ravi ___ Gluster-devel

[Gluster-devel] Problems with debugging NetBSD failures.

2015-12-18 Thread Ravishankar N
[rant] Hi, We are observing many regression tests that fail only on NetBSD. Some of them genuine and expose problems in the patch while many are spurious. While I'm not saying that we do away with NetBSD, it is extremely painful to set up the jenkins slaves to do any meaningful debugging. I

Re: [Gluster-devel] Recognizing contributors and displaying other useful bits?

2015-12-16 Thread Ravishankar N
On 12/16/2015 07:36 PM, Niels de Vos wrote: Hi, Many GUI tools provide an "About" box that displays some information about the project. Some applications (Wireshark for example) go an extra step by including a list of all people that contributed patches. That is quite a nice way for contributors

[Gluster-devel] I've taken nbslave75.cloud.gluster.org offline

2015-12-15 Thread Ravishankar N
$subject. tests/basic/afr/self-heal.t is constantly failing on this slave for http://review.gluster.org/#/c/12894/, I need to debug why. -Ravi ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-d

Re: [Gluster-devel] Fwd: Netbsd failures on ./tests/basic/afr/arbiter-statfs.t

2015-12-10 Thread Ravishankar N
On 12/10/2015 06:28 PM, Ravishankar N wrote: I have re-triggered netBSD regression for http://review.gluster.org/#/c/12936/ Let's see if it fails. Thanks Emmanuel, it has run on nbslave74 and passed this time. Pranith, if the patch looks okay, please merge. --

Re: [Gluster-devel] Fwd: Netbsd failures on ./tests/basic/afr/arbiter-statfs.t

2015-12-10 Thread Ravishankar N
Original message From: m...@netbsd.org Date: To: Ravishankar N Cc: Vijay Bellur ,Susant Palai ,Gluster Devel Subject: Re: [Gluster-devel] Netbsd failures on ./tests/basic/afr/arbiter-statfs.t Ravishankar N wrote: > $ ls -l /dev/rvnd0d > crw-r- 1 root op

Re: [Gluster-devel] Netbsd failures on ./tests/basic/afr/arbiter-statfs.t

2015-12-10 Thread Ravishankar N
On 12/10/2015 04:23 PM, Emmanuel Dreyfus wrote: Something is rotten on this VM: vnconfig uses a VNDIOCGET ioctl on /dev/rvnd0d to obtain loopback device lists. If /dev/vnd0d is a character device with major 41, minor 3, then I would try to reboot. Otherwise first try to fix that by running MAKED

Re: [Gluster-devel] Netbsd failures on ./tests/basic/afr/arbiter-statfs.t

2015-12-10 Thread Ravishankar N
On 12/09/2015 08:47 PM, Vijay Bellur wrote: On 08/24/2015 07:01 AM, Susant Palai wrote: Ravi, The test case ./tests/basic/afr/arbiter-statfs.t failing frequently on netbsd machine. Requesting to take a look. tests/basic/afr/arbiter-statfs.t seems to be affecting most NetBSD runs now. Rav

Re: [Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Ravishankar N
I am adding the test case to bad tests for the moment @ http://review.gluster.org/#/c/12925/ . Makes me wonder if we can upgrade the build machines to at least centos7 if not fedora. 2.6 is really an old kernel! Thanks, Ravi On 12/09/2015 02:40 PM, Ravishankar N wrote: I'll take a lo

Re: [Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Ravishankar N
-- Ravishankar N work: +91 80 3924 5143 extension: 8373143 mobile: +91 96118 43905 irc nick: itisravi

[Gluster-devel] NetBSD failure

2015-12-03 Thread Ravishankar N
Hi, arbiter.t failed a regression run at https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12248/consoleFull The test that failed is: ./tests/basic/afr/arbiter.t (Wstat: 0 Tests: 51 Failed: 1) Failed test: 10 which is TEST $CLI volume delete $V0` But looking at the logs, I f

Re: [Gluster-devel] NetBSD regression failures

2015-10-23 Thread Ravishankar N
+ gluster-infra All arbiter-statfs.t tests that are failing are on nbslave74.cloud.gluster.org. Loopback mounts are not happening on that slave. Perhaps it needs to be rebooted. -Ravi On 10/23/2015 01:47 PM, Nithya Balachandran wrote: Hi, NetBSD regression runs are failing in /tests/basic/

Re: [Gluster-devel] [Gluster-users] AFR arbiter volumes

2015-09-09 Thread Ravishankar N
On 09/09/2015 09:27 PM, David Gossage wrote: Once the volume is created as an Arbiter volume can it at a later time be changed to a replica 3 with all bricks containing data? This is not possible. At least not in a straight forward way. You can manually hack volfiles to remove arbiter specif

Re: [Gluster-devel] [Gluster-users] AFR arbiter volumes

2015-09-09 Thread Ravishankar N
On 09/09/2015 03:22 PM, wodel youchi wrote: Hi, I'm quite new to GlusterFS. Since the arbiter does not hold any data, how do I choose its size compared to the other 2 bricks? Regards It depends on the maximum number of inodes (`df -i`) any of the other 2 normal bricks can hold. The recom
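The inode check described here can be done with df; a small sketch, where the brick path is a placeholder and the per-file overhead must be measured on your own workload:

```shell
# The arbiter brick stores every file as a zero-byte entry plus xattrs,
# so free *inodes* (not bytes) on its filesystem are the real limit.
brick_path=/      # placeholder: path of the would-be arbiter brick
free_inodes=$(df -Pi "$brick_path" | awk 'NR==2 {print $4}')
echo "free inodes on $brick_path: $free_inodes"
# Compare this against the number of files the data bricks must hold;
# the related sizing thread estimates per-file disk overhead empirically.
```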

Re: [Gluster-devel] AFR arbiter volumes

2015-09-08 Thread Ravishankar N
is just the term used to indicate the state of AFR changelog xattrs. If a brick is down and a write/modification FOP happens, then the bricks that are up 'blame' the one that is down using these xattrs. Thanks, Ravi Thanks Naga On 09-Sep-2015, at 7:17 am, Ravishankar N wrote

[Gluster-devel] AFR arbiter volumes

2015-09-08 Thread Ravishankar N
Sending out this mail for awareness/feedback. *What:* Since glusterfs-3.7, AFR supports creation of arbiter volumes. These are a special type of replica 3 gluster volume where the 3rd brick is (always) configured

Re: [Gluster-devel] NetBSD builds failing

2015-09-07 Thread Ravishankar N
On 09/07/2015 02:45 PM, Emmanuel Dreyfus wrote: I nuked nbslave75:/build/install/var/log/glusterfs/*.tar.gz It should fix the problem. Thanks for that, Emmanuel.

[Gluster-devel] NetBSD builds failing

2015-09-07 Thread Ravishankar N
The last 15-odd regressions seem to have failed with the same error: Triggered by Gerrit: http://review.gluster.org/12109 in silent mode. Building remotely on nbslave75.cloud.gluster.org (netbsd7_regression) in workspace /home/jenk

Re: [Gluster-devel] Initial version of coreutils released

2015-08-13 Thread Ravishankar N
On 08/14/2015 03:49 AM, Craig Cabrey wrote: Hi everyone, I just pushed an initial version of the coreutils project that I have been working on during the course of this summer. There is still a lot to do, but I'm pretty excited about its start. Please check it out and if you want to help ou

Re: [Gluster-devel] semi-sync replication

2015-08-12 Thread Ravishankar N
just that we don't wait for all responses before unwinding to DHT ) failed on some bricks, the self-heal would take care of it.. Thanks, Ravi Replay cache may serve as a lifeline in such a scenario. Thanks -Anoop - Original Message ----- From: "Ravishankar N" To: "An

[Gluster-devel] Patch merge request-3.7 branch: http://review.gluster.org/#/c/11858/

2015-08-12 Thread Ravishankar N
Could some one with merge rights take http://review.gluster.org/#/c/11858/ in for the 3.7 branch? This backport has +2 from the maintainer and has passed regressions. Thanks in advance :-) Ravi ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] semi-sync replication

2015-08-12 Thread Ravishankar N
On 08/12/2015 12:50 PM, Anoop Nair wrote: Hi, Do we have plans to support "semi-synchronous" replication in the future? By semi-sync I mean writing to one leg of the replica, securing the write on a faster stable storage (capacitor-backed SSD or NVRAM) and then acknowledging the client. The

Re: [Gluster-devel] broken test on release-3.7

2015-07-28 Thread Ravishankar N
On 07/28/2015 06:42 PM, Emmanuel Dreyfus wrote: On Tue, Jul 28, 2015 at 06:11:24PM +0530, Atin Mukherjee wrote: Its because of a missing backport. I've sent a patch [1] for the same. But how did the breakage got into the branch? Don't we have petty regression tests that should have caught it?

[Gluster-devel] Minutes of today's Gluster Community Meeting (2015-07-21)

2015-07-21 Thread Ravishankar N
(1) Regards, Ravi On 07/21/2015 04:38 PM, Ravishankar N wrote: Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 50 minutes)

2015-07-21 Thread Ravishankar N
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your

Re: [Gluster-devel] [FAILED] regression tests: tests/bugs/distribute/bug-1066798.t, tests/basic/volume-snapshot.t

2015-07-20 Thread Ravishankar N
One more core for volume-snapshot.t: https://build.gluster.org/job/rackspace-regression-2GB-triggered/12605/consoleFull On 07/20/2015 03:00 PM, Raghavendra Talur wrote: Adding Susant and Avra for dht and snapshot test cases respectively. On Mon, Jul 20, 2015 at 11:45 AM, Milind Changire mail

Re: [Gluster-devel] Spurious failures in tests/basic/afr/arbiter.t

2015-07-20 Thread Ravishankar N
On 07/20/2015 12:45 PM, Niels de Vos wrote: On Mon, Jul 20, 2015 at 09:25:15AM +0530, Ravishankar N wrote: I'll take a look. Thanks. I'm actually not sure if this is an arbiter.t issue; maybe I blamed it too early? It's the first test that gets executed, and no others are tried after

Re: [Gluster-devel] Spurious failures in tests/basic/afr/arbiter.t

2015-07-19 Thread Ravishankar N
I'll take a look. Regards, Ravi On 07/20/2015 03:07 AM, Niels de Vos wrote: I have seen several occurences of failures in arbiter.t now. This is one of the errors: https://build.gluster.org/job/rackspace-regression-2GB-triggered/12626/consoleFull [21:20:20] ./tests/basic/afr/arbiter

Re: [Gluster-devel] Spurious failures again

2015-07-08 Thread Ravishankar N
On 07/08/2015 11:16 PM, Atin Mukherjee wrote: I think our linux regression is again unstable. I am seeing at least 10 such test cases (if not more) which have failed. I think we should again start maintaining an etherpad page (probably the same earlier one) and keep track of them otherwise

Re: [Gluster-devel] Spurious failures again

2015-07-08 Thread Ravishankar N
On 07/08/2015 03:57 PM, Anuradha Talur wrote: - Original Message - From: "Kaushal M" To: "Gluster Devel" Sent: Wednesday, July 8, 2015 3:42:12 PM Subject: [Gluster-devel] Spurious failures again I've been hitting spurious failures in Linux regression runs for my change [1]. The fo

Re: [Gluster-devel] healing of bad objects (marked by scrubber)

2015-07-07 Thread Ravishankar N
On 07/08/2015 11:42 AM, Raghavendra Bhat wrote: Adding the correct gluster-devel id. Regards, Raghavendra Bhat On 07/08/2015 11:38 AM, Raghavendra Bhat wrote: Hi, In the bit-rot feature, the scrubber marks corrupted objects (objects whose data has gone bad) as bad (via an extended attribute)

Re: [Gluster-devel] Mount hangs because of connection delays

2015-07-02 Thread Ravishankar N
On 07/02/2015 07:04 PM, Pranith Kumar Karampuri wrote: hi, When the glusterfs mount process is coming up, all cluster xlators wait for at least one event from all the children before propagating the status upwards. Sometimes the client xlator takes up to 2 minutes to propagate this event (https://
