Re: [Gluster-devel] Bugs summary jenkins job

2021-03-10 Thread Ravishankar N
+1. On 10/03/21 7:37 pm, Amar Tumballi wrote: I personally haven't checked it after migrating to GitHub. Haven't seen any PRs coming with a bug reference either. IMO, OK to stop the job and clean up the python2 reference. On Wed, 10 Mar, 2021, 7:21 pm Michael Scherer,

Re: [Gluster-devel] Removing problematic language in geo-replication

2020-12-30 Thread Ravishankar N
branch). Also, for any new PRs that we are sending/reviewing/merging, we need to keep in mind not to re-introduce any offensive words. Wishing you all a happy new year! Ravi On 22/07/20 5:06 pm, Aravinda VK wrote: +1 On 22-Jul-2020, at 2:34 PM, Ravishankar N wrote: Hi, The gluster code

Re: [Gluster-devel] .glusterfs directory?

2020-12-21 Thread Ravishankar N
On 21/12/20 2:35 pm, Emmanuel Dreyfus wrote: On Mon, Dec 21, 2020 at 01:53:06PM +0530, Ravishankar N wrote: Are you talking about the entries inside .glusterfs/indices/xattrop/* ? Any stale entries here should automatically be purged by the self-heal daemon as it crawls the folder periodically

Re: [Gluster-devel] .glusterfs directory?

2020-12-21 Thread Ravishankar N
On 21/12/20 1:16 pm, Emmanuel Dreyfus wrote: On a healthy system, one should definitely not remove any files or subdirectories inside .glusterfs as they contain important metadata. Which entries specifically inside .glusterfs do you think are stale and why? There are indexes leading to no

Re: [Gluster-devel] .glusterfs directory?

2020-12-20 Thread Ravishankar N
On 21/12/20 7:10 am, Emmanuel Dreyfus wrote: Hello, I have a lot of stale entries in bricks' .glusterfs directories. Is it safe to just rm -rf it and hope for automatic rebuild? Reading the source and experimenting, it does not seem obvious. Or is there a way to clean up stale entries that

Re: [Gluster-devel] Toggle storage.linux-aio and volume restart

2020-12-08 Thread Ravishankar N
On 09/12/20 10:39 am, Ravishankar N wrote: On 08/12/20 9:15 pm, Dmitry Antipov wrote: IOW if aio_configured is true, fops->readv and fops->writev should be set to posix_aio_readv() and posix_aio_writev(), respectively. But the whole picture looks like something in xlator graph si

Re: [Gluster-devel] Toggle storage.linux-aio and volume restart

2020-12-08 Thread Ravishankar N
On 08/12/20 9:15 pm, Dmitry Antipov wrote: IOW if aio_configured is true, fops->readv and fops->writev should be set to posix_aio_readv() and posix_aio_writev(), respectively. But the whole picture looks like something in xlator graph silently reverts fops->readv and fops->writev back to
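
For readers following the fop discussion above, here is a minimal, self-contained C sketch of the switching mechanism being described. The names mirror the thread's posix_aio_readv/posix_aio_writev, but everything here is hypothetical illustration, not GlusterFS code:

    #include <stdio.h>

    /* Illustrative only: a fop table whose read/write entries are
     * swapped based on an "aio" flag, the way posix_aio_readv and
     * posix_aio_writev replace the regular fops. */
    typedef struct {
        void (*readv)(void);
        void (*writev)(void);
    } fops_t;

    static void plain_readv(void)  { puts("posix_readv"); }
    static void plain_writev(void) { puts("posix_writev"); }
    static void aio_readv(void)    { puts("posix_aio_readv"); }
    static void aio_writev(void)   { puts("posix_aio_writev"); }

    static void configure_fops(fops_t *fops, int aio_configured)
    {
        /* The thread's suspicion: a graph switch on volume restart
         * re-runs initialization without this step, silently
         * reverting the pointers to the plain variants. */
        fops->readv  = aio_configured ? aio_readv  : plain_readv;
        fops->writev = aio_configured ? aio_writev : plain_writev;
    }

    int main(void)
    {
        fops_t fops;
        configure_fops(&fops, 1);
        fops.readv();   /* prints "posix_aio_readv" */
        return 0;
    }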

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-17 Thread Ravishankar N
was intended. -Ravi Best Regards, Strahil Nikolov On Tuesday, 17 November 2020, 14:16:36 GMT+2, Ravishankar N wrote: Hi Strahil, I would have imagined editing the 'Requires' section in glusterfs.spec.in would have sufficed. Do you need rpms though? A source install is not enoug

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-17 Thread Ravishankar N
resource-agents >= 4.1.0 (instead of 4.2.0) ? I've replaced every occurrence I found and still it tries to grab resource-agents 4.2 (which is not available on EL8). Best Regards, Strahil Nikolov On Monday, 16 November 2020, 13:15:54 GMT+2, Ravishankar N wrote: I

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread Ravishankar N
nly IPTABLES, while EL8 uses NFTABLES ... Best Regards, Strahil Nikolov On Monday, 16 November 2020, 10:47:43 GMT+2, Yaniv Kaul wrote: On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N wrote: On 15/11/20 8:24 pm, Strahil Nikolov wrote: Hello All, did anyone get a cha

Re: [Gluster-devel] [Gluster-users] Docs on gluster parameters

2020-11-16 Thread Ravishankar N
t; field that gives a short description. HTH, Ravi Best Regards, Strahil Nikolov On Monday, 16 November 2020, 10:36:09 GMT+2, Ravishankar N wrote: On 14/11/20 3:23 am, Mahdi Adnan wrote: Hi, Differently, the Gluster docs are missing quite a bit regarding th

Re: [Gluster-devel] [Gluster-users] Docs on gluster parameters

2020-11-16 Thread Ravishankar N
On 14/11/20 3:23 am, Mahdi Adnan wrote: Hi, Differently, the Gluster docs are missing quite a bit regarding the available options that can be used in the volumes. Not only that, there are some options that might corrupt data and do not have proper documentation, for example, disable Sharding

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread Ravishankar N
On 15/11/20 8:24 pm, Strahil Nikolov wrote: Hello All, did anyone get a chance to look at  https://github.com/gluster/glusterfs/issues/1778 ? A look at https://review.gluster.org/#/c/glusterfs/+/23648/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c@1117 seems to indicate this could be due to

Re: [Gluster-devel] Experimental xlators? Where to find info about them

2020-11-12 Thread Ravishankar N
%20FS.pdf) Regards, Ravi Federico On 12/11/20 11:55, Ravishankar N wrote: On 12/11/20 4:18 pm, Federico Strati wrote: Hello, I'm looking for info on experimental xlators fdl and jbr and lex, where to find info about them? They were moved to https://github.com/gluster/glusterfs-xlators

Re: [Gluster-devel] Experimental xlators? Where to find info about them

2020-11-12 Thread Ravishankar N
On 12/11/20 4:18 pm, Federico Strati wrote: Hello, I'm looking for info on experimental xlators fdl and jbr and lex, where to find info about them? They were moved to https://github.com/gluster/glusterfs-xlators Thanks in advance Federico

Re: [Gluster-devel] On some (spurious) test failures

2020-10-27 Thread Ravishankar N
On 27/10/20 9:13 pm, Dmitry Antipov wrote: I've never had the following tests succeed, neither via 'run-tests.sh' nor running manually with 'prove -vf': tests/basic/afr/entry-self-heal.t (Wstat: 0 Tests: 252 Failed: 2) Failed tests: 104, 208 tests/basic/afr/entry-self-heal-anon-dir-off.t

Re: [Gluster-devel] Pull Request review workflow

2020-10-15 Thread Ravishankar N
On 15/10/20 4:36 pm, Sheetal Pamecha wrote: +1 Just a note to the maintainers who are merging PRs to have patience and check the commit message when there is more than one commit in a PR. Makes sense. Another thing to consider is that the rfc.sh script always does a rebase

[Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Ravishankar N
Hi, The gluster code base has some words and terminology (blacklist, whitelist, master, slave etc.) that can be considered hurtful/offensive to people in a global open source setting. Some of the words can be fixed trivially but the Geo-replication code seems to be something that needs extensive

Re: [Gluster-devel] Help with smoke test failure

2020-07-17 Thread Ravishankar N
On 17/07/20 7:47 pm, Emmanuel Dreyfus wrote: Hello I am still stuck on this one: how should I address the missing SpecApproved and DocApproved here? On Fri, Jul 10, 2020 at 05:43:54PM +0200, Emmanuel Dreyfus wrote: What should I do to get this passed? One needs to add the appropriate

Re: [Gluster-devel] gluster safe ugpgrade

2020-01-20 Thread Ravishankar N
On 20/01/20 2:13 pm, Roman wrote: Hello dear devs team! I'm sorry to write to you, not the users list, but I really would like to have the devs' opinion on my issue. I've got multiple solutions running on old gluster versions (same version per cluster, no mixed versions in same cluster): Some

Re: [Gluster-devel] [Gluster-Maintainers] Modifying gluster's logging mechanism

2019-11-22 Thread Ravishankar N
On 22/11/19 3:13 pm, Barak Sason Rofman wrote: This is actually one of the main reasons I wanted to bring this up for discussion - will it be fine with the community to run a dedicated tool to reorder the logs offline? I think it is a bad idea to log without ordering and later relying on an

Re: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Ravishankar N
On 24/08/19 9:26 AM, Yaniv Kaul wrote: I don't like mixed mode, but I also dislike Github's code review tools, so I'd like to remind the option of using http://gerrithub.io/ for code review. Other than that, I'm in favor of moving over. Y. +1 for using gerrithub for code review when we move

Re: [Gluster-devel] [RFC] What if client fuse process crash?

2019-08-06 Thread Ravishankar N
but no idea why they did not get merged into glusterfs mainline. Do you know why? Thanks, Changwei [1]: https://review.gluster.org/#/c/glusterfs/+/16843/ https://github.com/gluster/glusterfs/issues/242 On 2019/8/6 1:12 PM, Ravishankar N wrote: On 05/08/19 3:31 PM, Changwei Ge wrote: Hi

Re: [Gluster-devel] [RFC] What if client fuse process crash?

2019-08-05 Thread Ravishankar N
On 05/08/19 3:31 PM, Changwei Ge wrote: Hi list, If somehow, glusterfs client fuse process dies. All subsequent file operations will be failed with error 'no connection'. I am curious if the only way to recover is umount and mount again? Yes, this is pretty much the case with all fuse based

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-02 Thread Ravishankar N
On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: Hi All, In glusterfs, there is an issue regarding the fallocate behavior. In short, if someone does fallocate from the mount point with some size that is greater than the available size in the backend filesystem where the file is
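
For context, this is roughly what the scenario above looks like from an application's side; a minimal sketch, assuming an illustrative mount path and a backend filesystem far smaller than the requested size:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative path; assume the brick filesystem is much
         * smaller than the 1 TiB requested below. */
        int fd = open("/mnt/glustervol/bigfile", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* The expectation discussed in the thread: this should fail
         * with ENOSPC rather than leave a partially allocated file. */
        if (fallocate(fd, 0, 0, (off_t)1 << 40) < 0)
            fprintf(stderr, "fallocate: %s\n", strerror(errno));

        close(fd);
        return 0;
    }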

Re: [Gluster-devel] [Gluster-users] No healing on peer disconnect - is it correct?

2019-06-10 Thread Ravishankar N
Adding people who can help you better with the heal part. @Karthik Subrahmanya @Ravishankar N do take a look and answer this part. Is this behaviour correct? I mean no healing is triggered after the peer is reconnected back and VMs. Thanks for the explanation. BR! Martin

Re: [Gluster-devel] questions on callstubs and "link-count" in index translator

2019-04-27 Thread Ravishankar N
On 26/04/19 10:53 PM, Junsong Li wrote: Hello list, I have a couple of questions on index translator implementation. * Why does gluster need callstub and a different worker queue (and thread) to process those call stubs? Is it just to lower the priority of fops of internal inodes?

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-06 Thread Ravishankar N
Tracker bug is https://bugzilla.redhat.com/show_bug.cgi?id=1692394, in case anyone wants to add blocker bugs. On 05/04/19 8:03 PM, Shyam Ranganathan wrote: Hi, Expected tagging date for release-6.1 is on April, 10th, 2019. Please ensure required patches are backported and also are passing

Re: [Gluster-devel] [Gluster-users] Self/Healing process after node maintenance

2019-01-22 Thread Ravishankar N
On 01/22/2019 02:57 PM, Martin Toth wrote: Hi all, I just want to make sure I understand how the self-healing process exactly works, because I need to turn one of my nodes down for maintenance. I have a replica 3 setup. Nothing complicated. 3 nodes, 1 volume, 1 brick per node (ZFS pool). All nodes running

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-08 Thread Ravishankar N
On 10/05/2018 08:29 PM, Shyam Ranganathan wrote: On 10/04/2018 11:33 AM, Shyam Ranganathan wrote: On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: RC1 would be around 24th of Sep. with final release tagging around 1st of Oct. RC1 now stands to be tagged tomorrow, and patches that are being

Re: [Gluster-devel] index_lookup segfault in glusterfsd brick process

2018-10-04 Thread Ravishankar N
On 10/04/2018 01:57 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 3, 2018 at 11:20 PM 김경표 wrote: Hello folks. A few days ago I found my EC(4+2) volume was degraded. I am using 3.12.13-1.el7.x86_64. One brick was down, below is the brick log. I am

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Ravishankar N
alli wrote: On Mon, Sep 17, 2018 at 10:00 AM, Ravishankar N wrote: On 09/13/2018 03:34 PM, Niels de Vos wrote: On Thu, Sep 13, 2018 at 02:25:22PM +0530, Ravishankar N wrote: ... What rules does clang impose on function/argument wrapping and alignment? I somehow found the new code wr

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-13 Thread Ravishankar N
On 09/12/2018 07:31 PM, Amar Tumballi wrote: Top posting: All is well at the tip of glusterfs master branch now. We will post a postmortem report of events and what went wrong in this activity, later. With this, Shyam, you can go ahead with release-v5.0 branching. -Amar On Wed, Sep 12,

Re: [Gluster-devel] Master branch lock down: RCA for tests (remove-brick-testcases.t)

2018-08-13 Thread Ravishankar N
On 08/13/2018 06:12 AM, Shyam Ranganathan wrote: As a means of keeping the focus going and squashing the remaining tests that were failing sporadically, request each test/component owner to respond to this mail, changing the subject (testname.t) to the test name that they are responding to

Re: [Gluster-devel] Announcing Softserve- serve yourself a VM

2018-08-11 Thread Ravishankar N
On 02/28/2018 06:56 PM, Deepshikha Khandelwal wrote: Hi, We have launched the alpha version of SOFTSERVE[1], which allows Gluster Github organization members to provision virtual machines for a specified duration of time. These machines will be deleted automatically afterwards. Now

Re: [Gluster-devel] Master branch lock down status (Fri, August 9th)

2018-08-10 Thread Ravishankar N
On 08/11/2018 07:29 AM, Shyam Ranganathan wrote: ./tests/bugs/replicate/bug-1408712.t (one retry) I'll take a look at this. But it looks like archiving the artifacts (logs) for this run (https://build.gluster.org/job/regression-on-demand-full-run/44/consoleFull) was a failure. Thanks,

[Gluster-devel] Are there daemons directly talking to arbiter bricks?

2018-08-09 Thread Ravishankar N
Hi, The arbiter brick does not store any data and fails any readv requests wound to it with a log message. Any client talking to the bricks via AFR is safe because AFR takes care of not winding down any readv to arbiter bricks. But we have other daemons like bitd and scrubd that talk directly to

Re: [Gluster-devel] Master branch lock down status

2018-08-08 Thread Ravishankar N
On 08/08/2018 05:07 AM, Shyam Ranganathan wrote: 5) Current test failures We still have the following tests failing and some without any RCA or attention, (If something is incorrect, write back). ./tests/basic/afr/add-brick-self-heal.t (needs attention) From the runs captured at

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Ravishankar N
On 07/24/2018 08:45 PM, Raghavendra Gowdappa wrote: I tried higher values of attribute-timeout and it's not helping. Are there any other similar split-brain related tests? Can I mark these tests bad for the time being, as the md-cache patch has a deadline? `git grep split-brain-status` on

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Ravishankar N
On 07/25/2018 09:06 AM, Raghavendra Gowdappa wrote: On Tue, Jul 24, 2018 at 6:54 PM, Ravishankar N <ravishan...@redhat.com> wrote: On 07/24/2018 06:30 PM, Ravishankar N wrote: On 07/24/2018 02:56 PM, Raghavendra Gowdappa wrote: All, I was trying

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Ravishankar N
On 07/24/2018 06:30 PM, Ravishankar N wrote: On 07/24/2018 02:56 PM, Raghavendra Gowdappa wrote: All, I was trying to debug regression failures on [1] and observed that split-brain-resolution.t was failing consistently. = TEST 45 (line 88): 0

Re: [Gluster-devel] How gluster handle split-brain in the corner case from non-overlapping range lock in same file?

2018-05-13 Thread Ravishankar N
On 05/05/2018 10:04 PM, Yanfei Wang wrote: Hi, https://docs.gluster.org/en/v3/Administrator%20Guide/arbiter-volumes-and-quorum/ says: ``` There is a corner case even with replica 3 volumes where the file can end up in a split-brain. AFR usually takes range locks for the {offset, length} of

Re: [Gluster-devel] [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread Ravishankar N
Mabi, It looks like one of the patches is not a straightforward cherry-pick to the 3.12 branch. Even though the conflict might be easy to resolve, I don't think it is a good idea to hurry it for tomorrow. We will definitely have it ready by the next minor release (or if by chance the

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-21 Thread Ravishankar N
On 03/20/2018 07:07 PM, Shyam Ranganathan wrote: On 03/12/2018 09:37 PM, Shyam Ranganathan wrote: Hi, As we wind down on 4.0 activities (waiting on docs to hit the site, and packages to be available in CentOS repositories before announcing the release), it is time to start preparing for the

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Ravishankar N
On 03/13/2018 07:07 AM, Shyam Ranganathan wrote: Hi, As we wind down on 4.0 activities (waiting on docs to hit the site, and packages to be available in CentOS repositories before announcing the release), it is time to start preparing for the 4.1 release. 4.1 is where we have GD2 fully

Re: [Gluster-devel] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N
On 03/02/2018 11:04 AM, Anoop C S wrote: On Fri, 2018-03-02 at 10:11 +0530, Ravishankar N wrote: + Anoop. It looks like clients on the old (3.12) nodes are not able to talk to the upgraded (4.0) node. I see messages like these on the old clients: [2018-03-02 03:49:13.483458] W [MSGID

Re: [Gluster-devel] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N
On 03/02/2018 10:11 AM, Ravishankar N wrote: + Anoop. It looks like clients on the old (3.12) nodes are not able to talk to the upgraded (4.0) node. I see messages like these on the old clients:  [2018-03-02 03:49:13.483458] W [MSGID: 114007] [client-handshake.c:1197:client_setvolume_cbk

Re: [Gluster-devel] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N
-lk-version' in the options. Is there something more to be done on BZ 1544366? -Ravi On 03/02/2018 08:44 AM, Ravishankar N wrote: On 03/02/2018 07:26 AM, Shyam Ranganathan wrote: Hi Pranith/Ravi, So, to keep a long story short, post upgrading 1 node in a 3 node 3.13 cluster, self-heal

Re: [Gluster-devel] Release 4.0: Unable to complete rolling upgrade tests

2018-03-01 Thread Ravishankar N
On 03/02/2018 07:26 AM, Shyam Ranganathan wrote: Hi Pranith/Ravi, So, to keep a long story short, post upgrading 1 node in a 3 node 3.13 cluster, self-heal is not able to catch the heal backlog and this is a very simple synthetic test anyway, but the end result is that upgrade testing is

Re: [Gluster-devel] use-case for 4 replicas and 1 arbiter

2018-02-12 Thread Ravishankar N
On 02/12/2018 05:02 PM, Niels de Vos wrote: Hi Ravi, Last week I was in a discussion about 4-way replication and one arbiter (5 bricks per set). It seems that it is not possible to create this configuration through the CLI. What would it take to make this available? The most important changes

Re: [Gluster-devel] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Ravishankar N
On 02/10/2018 01:24 AM, Shyam Ranganathan wrote: On 02/02/2018 10:26 AM, Ravishankar N wrote: 2) "Replace MD5 usage to enable FIPS support" - Ravi, Amar + Kotresh who has done most (all to be precise) of the patches listed in https://github.com/gluster/glusterfs/issues/230  in cas

Re: [Gluster-devel] query about a split-brain problem found in glusterfs3.12.3

2018-02-08 Thread Ravishankar N
test, when the sn-0 side file/dir does not have the “dirty” and “trusted.afr.export-client-*” attributes and the sn-1 side file/dir has both “dirty” and “trusted.afr.export-client-*” non-zero. Gluster could self-heal such a scenario, but in this case it could never self-heal. From: Ravishankar N

Re: [Gluster-devel] query about a split-brain problem found in glusterfs3.12.3

2018-02-07 Thread Ravishankar N
using http://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ If you want to prevent split-brain, you would need to use a replica 3 or arbiter volume. Regards, Ravi From: Ravishankar N [mailto:ravishan...@redhat.com] Sent: Thursday, February 08, 2018 12:12 AM To: Zhou

Re: [Gluster-devel] query about a split-brain problem found in glusterfs3.12.3

2018-02-07 Thread Ravishankar N
On 02/07/2018 10:39 AM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote: Hi glusterfs expert: Good day. Lately, we met a glusterfs split-brain problem in our env in /mnt/export/testdir. We start 3 ior processes (IOR tool) from non-sn nodes, which are creating/removing files repeatedly in testdir.
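
For anyone debugging similar reports: the AFR changelog xattrs named in this thread live on the brick files and can be read directly (the C sketch below is roughly equivalent to `getfattr -d -m . -e hex`). The xattr names follow the thread's "export" volume and client-0 brick and are otherwise assumptions; run it against a brick path, not the mount:

    #include <stdio.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        /* Names follow this thread's volume; adjust the client
         * suffix for other bricks. */
        const char *names[] = { "trusted.afr.dirty",
                                "trusted.afr.export-client-0" };
        unsigned char buf[64];

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file-on-brick>\n", argv[0]);
            return 1;
        }
        for (int i = 0; i < 2; i++) {
            ssize_t n = getxattr(argv[1], names[i], buf, sizeof(buf));
            if (n < 0) { perror(names[i]); continue; }
            /* AFR stores three big-endian 32-bit counters: data,
             * metadata and entry pending operations. */
            printf("%s =", names[i]);
            for (ssize_t j = 0; j < n; j++)
                printf(" %02x", buf[j]);
            printf("\n");
        }
        return 0;
    }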

Re: [Gluster-devel] Release 4.0: Release notes (please read and contribute)

2018-02-02 Thread Ravishankar N
On 02/01/2018 11:02 PM, Shyam Ranganathan wrote: On 01/29/2018 05:10 PM, Shyam Ranganathan wrote: Hi, I have posted an initial draft version of the release notes here [1]. I would like to *suggest* the following contributors to help improve and finish the release notes by 06th Feb, 2017. As

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N
On 01/19/2018 06:19 AM, Shyam Ranganathan wrote: On 01/18/2018 07:34 PM, Ravishankar N wrote: On 01/18/2018 11:53 PM, Shyam Ranganathan wrote: On 01/02/2018 11:08 AM, Shyam Ranganathan wrote: Hi, As release 3.13.1 is announced, here are the needed details for 3.13.2 Release date: 19th

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-18 Thread Ravishankar N
On 01/18/2018 11:53 PM, Shyam Ranganathan wrote: On 01/02/2018 11:08 AM, Shyam Ranganathan wrote: Hi, As release 3.13.1 is announced, here are the needed details for 3.13.2 Release date: 19th Jan, 2018 (20th is a Saturday) Heads up, this is tomorrow. Tracker bug for blockers:

Re: [Gluster-devel] query about why glustershd can not afr_selfheal_recreate_entry because of "afr: Prevent null gfids in self-heal entry re-creation"

2018-01-16 Thread Ravishankar N
] On Behalf Of Ravishankar N Sent: Tuesday, January 16, 2018 1:44 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; Gluster Devel <gluster-devel@gluster.org> Subject: Re: [Gluster-devel] query about why glustershd can not afr_selfheal_recreate_entry because of

Re: [Gluster-devel] query about why glustershd can not afr_selfheal_recreate_entry because of "afr: Prevent null gfids in self-heal entry re-creation"

2018-01-15 Thread Ravishankar N
+ gluster-devel On 01/15/2018 01:41 PM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote: Hi glusterfs expert, Good day. When I do some tests on glusterfs self-heal I find the following prints showing that when a dir/file type gets an error it cannot get self-healed. Could you help to check if it

[Gluster-devel] Delete stale gfid entries during lookup

2017-12-28 Thread Ravishankar N
Hi, https://review.gluster.org/#/c/19070/2 modifies posix_lookup() to remove the stale .glusterfs entry during a gfid (nameless) lookup. The initial version (v1) of the patch attempted to remove it in posix_symlink in order to fix BZ 1529488, but in the review discussions (see the patch) it was
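
As background for the .glusterfs handle entries being discussed: on a brick, each gfid maps to a path of the form .glusterfs/<first-2-hex>/<next-2-hex>/<gfid>. A small illustrative C sketch (the brick path and gfid value are made up):

    #include <stdio.h>

    int main(void)
    {
        const char *brick = "/bricks/brick1";   /* illustrative */
        const char *gfid = "f0c60807-3c41-4ab7-b2e3-7c4988f2b1f8";
        char path[512];

        /* Layout: .glusterfs/<gfid[0:2]>/<gfid[2:4]>/<gfid> */
        snprintf(path, sizeof(path), "%s/.glusterfs/%.2s/%.2s/%s",
                 brick, gfid, gfid + 2, gfid);
        puts(path);  /* -> /bricks/brick1/.glusterfs/f0/c6/f0c60807-... */
        return 0;
    }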

[Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-14 Thread Ravishankar N
...for a lot of patches on master. The crash is in volume set; the .t just does a 'volume set help'. Can the glusterd devs take a look, as it is blocking merging patches? I have raised BZ 1526268 with the details. Thanks! Ravi

Re: [Gluster-devel] Tests failing on Centos 7

2017-11-27 Thread Ravishankar N
On 11/27/2017 07:12 PM, Nigel Babu wrote: Hello folks, I have an update on chunking. There's good news and bad. The first bit is that we have a chunked regression job now. It splits the run into 10 chunks that run in parallel. This chunking is quite simple at the moment and doesn't try to be

[Gluster-devel] ./tests/basic/ec/ec-4-1.t failed

2017-11-23 Thread Ravishankar N
...for my patch https://review.gluster.org/#/c/18791/ which only has AFR fixes. Log is at https://build.gluster.org/job/centos6-regression/7616/console . Request EC folks to take a look. Thanks, Ravi

Re: [Gluster-devel] Test cases failing on X86

2017-11-17 Thread Ravishankar N
On 11/17/2017 05:34 PM, Vaibhav Vaingankar wrote: Hi, I was executing test cases on an x86 Ubuntu 16.04 VM, however I found the following test cases are consistently failing. Are they expected failures, or is something missing? The following are the build steps I used: apt-get install make

Re: [Gluster-devel] AFR: Fail lookups when quorum not met

2017-10-09 Thread Ravishankar N
On 09/22/2017 07:27 PM, Niels de Vos wrote: On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote: Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick

Re: [Gluster-devel] brick multiplexing regression is broken

2017-10-06 Thread Ravishankar N
The test is failing on master without any patches: [root@tuxpad glusterfs]# prove tests/bugs/bug-1371806_1.t tests/bugs/bug-1371806_1.t .. 7/9 setfattr: ./tmp1: No such file or directory setfattr: ./tmp2: No such file or directory setfattr: ./tmp3: No such file or directory setfattr: ./tmp4:

Re: [Gluster-devel] brick multiplexing regression is broken

2017-10-06 Thread Ravishankar N
, Oct 6, 2017 at 11:04 AM, Ravishankar N <ravishan...@redhat.com> wrote: The test is failing on master without any patches: [root@tuxpad glusterfs]# prove tests/bugs/bug-1371806_1.t tests/bugs/bug-1371806_1.t .. 7/9 setfattr: ./tmp1:

[Gluster-devel] AFR: Fail lookups when quorum not met

2017-09-22 Thread Ravishankar N
Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve the lookup from one of the bricks that we got a positive

Re: [Gluster-devel] [Gluster-users] [Gluster-infra] lists.gluster.org issues this weekend

2017-09-22 Thread Ravishankar N
Hello, Are our servers still facing the overload issue? My replies to the gluster-users ML are not getting delivered to the list. Regards, Ravi On 09/19/2017 10:03 PM, Michael Scherer wrote: On Saturday, 16 September 2017 at 20:48 +0530, Nigel Babu wrote: Hello folks, We have discovered that for

Re: [Gluster-devel] How commonly applications make use of fadvise?

2017-08-11 Thread Ravishankar N
On 08/11/2017 04:51 PM, Niels de Vos wrote: On Fri, Aug 11, 2017 at 12:47:47AM -0400, Raghavendra Gowdappa wrote: Hi all, In a conversation between me, Milind and Csaba, Milind pointed out fadvise(2) [1] and its potential benefits to Glusterfs' caching translators like read-ahead etc. After
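
For reference, this is how an application issues the hint being discussed; a minimal sketch with an illustrative path:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glustervol/datafile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Hint that the file will be read sequentially; were fadvise
         * plumbed through FUSE, translators like read-ahead could act
         * on it. posix_fadvise returns an errno value, not -1. */
        int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        if (rc != 0)
            fprintf(stderr, "posix_fadvise: %d\n", rc);

        close(fd);
        return 0;
    }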

Re: [Gluster-devel] upstream regression suite is broken

2017-07-06 Thread Ravishankar N
I've sent a fix @ https://review.gluster.org/#/c/17721 On 07/07/2017 09:51 AM, Atin Mukherjee wrote: Krutika, tests/basis/stats-dump.t is failing all the time and as per my initial analysis after https://review.gluster.org/#/c/17709/ got into the mainline the failures are seen and reverting

Re: [Gluster-devel] reagarding backport information while porting patches

2017-06-22 Thread Ravishankar N
On 06/23/2017 09:15 AM, Pranith Kumar Karampuri wrote: hi, Now that we are doing backports with same Change-Id, we can find the patches and their backports both online and in the tree without any extra information in the commit message. So shall we stop adding text similar to: >

Re: [Gluster-devel] Support for statx in 4.0

2017-06-14 Thread Ravishankar N
On 06/12/2017 01:21 PM, Vijay Bellur wrote: Hey All, Linux 4.11 has added support for a new system call, statx [1]. statx provides more information than what stat() does today. Given that there could be potential users for this new interface it would be nice to have statx supported in 4.0.
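
For anyone unfamiliar with the new call, a minimal sketch of statx(2) usage (assumes Linux >= 4.11 and a glibc with the statx wrapper, i.e. 2.28+; the path is illustrative):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct statx stx;

        /* Request the basic stats plus the birth time, one field
         * that plain stat() cannot return. */
        if (statx(AT_FDCWD, "/mnt/glustervol/file", AT_STATX_SYNC_AS_STAT,
                  STATX_BASIC_STATS | STATX_BTIME, &stx) < 0) {
            perror("statx");
            return 1;
        }
        printf("size=%llu btime=%lld\n",
               (unsigned long long)stx.stx_size,
               (long long)stx.stx_btime.tv_sec);
        return 0;
    }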

Re: [Gluster-devel] [Gluster-users] 120k context switches on GlsuterFS nodes

2017-05-17 Thread Ravishankar N
On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote: + gluster-devel On Wed, May 17, 2017 at 10:50 PM, mabi wrote: I don't know exactly what kind of context-switches it was but what I know is that it is the "cs" number under "system"

Re: [Gluster-devel] tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - regression failures

2017-05-17 Thread Ravishankar N
? On Mon, May 15, 2017 at 10:52 AM, Ravishankar N <ravishan...@redhat.com> wrote: On 05/12/2017 03:33 PM, Atin Mukherjee wrote: tests/basic/afr/add-brick-self-heal.t <http://git.gluster.org/cgit/glusterfs.git/tree/tests/basic/afr

Re: [Gluster-devel] tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - regression failures

2017-05-14 Thread Ravishankar N
On 05/12/2017 03:33 PM, Atin Mukherjee wrote: tests/basic/afr/add-brick-self-heal.t is the 2nd in the list. All failures (https://fstat.gluster.org/weeks/1/failure/2) are in netbsd and look like an

Re: [Gluster-devel] tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - regression failures

2017-05-14 Thread Ravishankar N
On 05/14/2017 10:05 PM, Atin Mukherjee wrote: On Fri, May 12, 2017 at 3:51 PM, Karthik Subrahmanya wrote: Hey Atin, I had a look at "tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t". The test case passes in

Re: [Gluster-devel] stat() returns invalid file size when self healing

2017-04-12 Thread Ravishankar N
On 04/12/2017 01:57 PM, Mateusz Slupny wrote: Hi, I'm observing strange behavior when accessing glusterfs 3.10.0 volume through FUSE mount: when self-healing, stat() on a file that I know has non-zero size and is being appended to results in stat() return code 0, and st_size being set to 0

Re: [Gluster-devel] glusterd regression failure on centos

2017-03-22 Thread Ravishankar N
On 03/22/2017 11:54 PM, Atin Mukherjee wrote: Please file a bug in project-infra in gluster asking for a centos slave machine to debug the issue further, and Nigel should be able to assist you on that. On Wed, 22 Mar 2017 at 13:55, Gaurav Yadav

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-03-03 Thread Ravishankar N
On 03/03/2017 07:23 PM, Shyam wrote: On 03/03/2017 06:44 AM, Prashanth Pai wrote: On 02/28/2017 08:47 PM, Shyam wrote: We should be transitioning to using github for feature reporting and tracking, more fully from this release. So once again, if there exists any confusion on that front,

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-03-03 Thread Ravishankar N
On 02/28/2017 08:47 PM, Shyam wrote: We should be transitioning to using github for feature reporting and tracking, more fully from this release. So once again, if there exists any confusion on that front, reach out to the lists for clarification. I see that there was a discussion on this on

Re: [Gluster-devel] Logging in a multi-brick daemon

2017-02-15 Thread Ravishankar N
On 02/16/2017 04:09 AM, Jeff Darcy wrote: One of the issues that has come up with multiplexing is that all of the bricks in a process end up sharing a single log file. The reaction from both of the people who have mentioned this is that we should find a way to give each brick its own log

Re: [Gluster-devel] gluster source code help

2017-02-03 Thread Ravishankar N
On 02/03/2017 09:14 AM, jayakrishnan mm wrote: On Thu, Feb 2, 2017 at 8:17 PM, Ravishankar N <ravishan...@redhat.com> wrote: On 02/02/2017 10:46 AM, jayakrishnan mm wrote: Hi How do I determine, which part of the code is run on th

Re: [Gluster-devel] gluster source code help

2017-02-02 Thread Ravishankar N
On 02/02/2017 10:46 AM, jayakrishnan mm wrote: Hi, How do I determine which part of the code is run on the client, and which part of the code is run on the server nodes, by merely looking at the glusterfs source code? I know there are client-side and server-side translators, which

Re: [Gluster-devel] Patches being posted by Facebook and plans thereof

2016-12-22 Thread Ravishankar N
On 12/22/2016 11:31 AM, Shyam wrote: 1) Facebook will port all their patches to the special branch release-3.8-fb, where they have exclusive merge rights. i) I see that the Bugzilla IDs they are using for these patches are the same as the BZ ID of the corresponding 3.8 branch patches. These

Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Ravishankar N
On 11/14/2016 05:57 PM, Atin Mukherjee wrote: This would be a straightforward thing to implement at glusterd; anyone up for it? If not, we will take this into consideration for GlusterD 2.0. On Mon, Nov 14, 2016 at 10:28 AM, Mohammed Rafi K C

Re: [Gluster-devel] [Gluster-users] Hole punch support

2016-11-11 Thread Ravishankar N
+ gluster-devel. Can you raise an RFE bug for this and assign it to me? The thing is, FALLOC_FL_PUNCH_HOLE must be used in tandem with FALLOC_FL_KEEP_SIZE, and the latter is currently broken in gluster because there are some conversions done in iatt_from_stat() in gluster for quota to work.
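
For context, the constraint described above as an application would hit it; a minimal sketch (the path and offsets are illustrative):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glustervol/sparsefile", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* PUNCH_HOLE must be combined with KEEP_SIZE (the kernel
         * rejects it otherwise) -- which is why broken KEEP_SIZE
         * handling blocks hole-punch support, as noted above. */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      4096, 1 << 20) < 0)
            perror("fallocate(PUNCH_HOLE)");

        close(fd);
        return 0;
    }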

[Gluster-devel] Preventing lookups from serving metadata.

2016-11-08 Thread Ravishankar N
So there is a class of bugs* exposed in replicate volumes where, if the only good copy of the file is down, we still end up serving stale data to the application because of caching in various layers outside gluster. In fuse, this can be mitigated by setting attribute and entry-timeout to zero
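
For readers unfamiliar with these fuse knobs: in libfuse's low-level API the timeouts are set per reply. A hedged fragment (generic libfuse API, not gluster's actual fuse bridge) showing what zero timeouts mean:

    #define FUSE_USE_VERSION 30
    #include <fuse_lowlevel.h>
    #include <string.h>

    /* Fragment, not a complete filesystem: a lookup reply with
     * caching disabled. Zero timeouts force the kernel to revalidate
     * attributes and dentries with the filesystem on every access
     * instead of serving possibly stale cached copies. */
    static void reply_uncached(fuse_req_t req, fuse_ino_t ino,
                               const struct stat *attr)
    {
        struct fuse_entry_param e;

        memset(&e, 0, sizeof(e));
        e.ino = ino;
        e.attr = *attr;
        e.attr_timeout = 0.0;   /* don't cache attributes */
        e.entry_timeout = 0.0;  /* don't cache the dentry */
        fuse_reply_entry(req, &e);
    }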

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-02 Thread Ravishankar N
On 10/03/2016 06:58 AM, Pranith Kumar Karampuri wrote: On Mon, Oct 3, 2016 at 6:41 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N <ravishan...@redhat.com

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-09-30 Thread Ravishankar N
On 09/30/2016 06:38 PM, Niels de Vos wrote: On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote: hi, At the moment 'Reviewed-by' tag comes only if a +1 is given on the final version of the patch. But for most of the patches, different people would spend time on

Re: [Gluster-devel] [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Ravishankar N
On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote: Hi, This is an update to the previous mail about fine-graining the GlusterFS upstream bugzilla components. Finally we have come up with a new structure that would help in easy access of the bug for reporter and assignee too. In the new structure

Re: [Gluster-devel] logs/cores for smoke failures

2016-09-26 Thread Ravishankar N
On 09/27/2016 09:36 AM, Pranith Kumar Karampuri wrote: hi Nigel, Is there already a bug to capture these in the runs when failures happen? I am not able to understand why this failure happened: https://build.gluster.org/job/smoke/30843/console, logs/cores would have helped. Let me know

Re: [Gluster-devel] make install again compiling source

2016-09-19 Thread Ravishankar N
On 09/19/2016 05:07 PM, Avra Sengupta wrote: Hi, I ran "make -j" on the latest master, followed by make install. The make install, by itself, is doing a fresh compile every time (and totally ignoring the make I did before it). Is there any recent change that would cause this? Thanks.

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-06 Thread Ravishankar N
: 4.1.6-200.fc22.x86_64). By the way, 3.3 is a rather old version; you might want to use the latest 3.8.x release. At 2016-09-06 12:35:19, "Ravishankar N" <ravishan...@redhat.com> wrote: That is strange. I tried the experiment on a volume with a million files. The clie

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-05 Thread Ravishankar N
as changed about 4GB! Sent from NetEase Mail Master <http://u.163.com/signature> On 09/02/2016 09:45, Ravishankar N <ravishan...@redhat.com> wrote: On 09/02/2016 05:42 AM, Keiviw wrote: Even if I set the attribute-timeout and entry-timeout to 3600s (1h), in nodeB it didn't cach

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N
/entry timeout values. Make sure your volume has a lot of small files. -Ravi On 2016-09-01 16:37:00, "Ravishankar N" <ravishan...@redhat.com> wrote: On 09/01/2016 01:04 PM, Keiviw wrote: Hi, I have found that the GlusterFS client (mounted by FUSE) didn't cache metada

[Gluster-devel] Fwd: [Gluster-users] bug-upcall-stat.t always fails on master

2016-09-01 Thread Ravishankar N
Sorry, sent it to users instead of devel. I'll show myself out. Forwarded Message Subject: [Gluster-users] bug-upcall-stat.t always fails on master Date: Thu, 1 Sep 2016 22:41:53 +0530 From: Ravishankar N <ravishan...@redhat.com> To: gluster-us...@glust

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N
On 09/01/2016 01:04 PM, Keiviw wrote: Hi, I have found that the GlusterFS client (mounted by FUSE) didn't cache metadata like dentries and inodes. I have installed GlusterFS 3.6.0 in nodeA and nodeB, and brick1 and brick2 were in nodeA; then in nodeB, I mounted the volume to /mnt/glusterfs

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Ravishankar N
On 08/26/2016 09:39 PM, Pranith Kumar Karampuri wrote: On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri wrote: hi, Now that we are almost near the feature freeze date (31st of Aug), want to get a sense if any of

Re: [Gluster-devel] [Gluster-users] CFP for Gluster Developer Summit

2016-08-23 Thread Ravishankar N
Hello, Here is a proposal I'd like to make. Title: Throttling in gluster (https://github.com/gluster/glusterfs-specs/blob/master/accepted/throttling.md) Theme: Performance and scalability. The talk/discussion will be focused on server-side throttling of FOPS, using a throttling translator.
