Re: [Gluster-devel] Is it safe to lower SHARD_MIN_BLOCK_SIZE?

2020-04-14 Thread Krutika Dhananjay
So SHARD_MIN_BLOCK_SIZE is currently hardcoded to 4MB. If you want to reduce that value further (purely for the sake of testing, not recommended otherwise), you will need to change its value in the source code here -
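A minimal sketch of the kind of one-line change described above, assuming the floor is still the 4MB constant defined in xlators/features/shard/src/shard.h; verify the exact file and expression in the branch you build:

    /* xlators/features/shard/src/shard.h -- illustrative only */
    #define SHARD_MIN_BLOCK_SIZE (4 * GF_UNIT_MB)    /* the current 4MB floor */

    /* For throwaway testing one might lower it, for example to: */
    /* #define SHARD_MIN_BLOCK_SIZE (64 * GF_UNIT_KB) */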

[Gluster-devel] More intelligent file distribution across subvols of DHT when file size is known

2019-05-22 Thread Krutika Dhananjay
Hi, I've proposed a solution to the problem of space running out in some children of DHT even when its other children have free space available, here - https://github.com/gluster/glusterfs/issues/675. The proposal aims to solve a very specific instance of this generic class of problems where

Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-05-21 Thread Krutika Dhananjay
explain what remote-dio and strict-o-direct variables changed in > behaviour of my Gluster? It would be great for later archive/users to > understand what and why this solved my issue. > > Anyway, Thanks a LOT!!! > > BR, > Martin > > On 13 May 2019, at 10:20, Krutika Dhananjay wr

Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
/send profiling info after some VM will be failed. I > suppose this is correct profiling strategy. > About this, how many vms do you need to recreate it? A single vm? Or multiple vms doing IO in parallel? > Thanks, > BR! > Martin > > On 13 May 2019, at 09:21, Kr

Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
Also, what's the caching policy that qemu is using on the affected vms? Is it cache=none? Or something else? You can get this information in the command line of the qemu-kvm process corresponding to your vm in the ps output. -Krutika On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay wrote: > W

Re: [Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
What version of gluster are you using? Also, can you capture and share volume-profile output for a run where you manage to recreate this issue? https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command Let me know if you have any

Re: [Gluster-devel] Shard test failing more commonly on master

2018-12-19 Thread Krutika Dhananjay
Sent https://review.gluster.org/c/glusterfs/+/21889 to fix the original issue. -Krutika On Wed, Dec 5, 2018 at 10:58 AM Atin Mukherjee wrote: > We can't afford to keep a bad test hanging for more than a day which > penalizes other fixes to be blocked (I see atleast 4-5 more patches failed >

Re: [Gluster-devel] Release 5: GA and what are we waiting on

2018-10-11 Thread Krutika Dhananjay
On Thu, Oct 11, 2018 at 8:55 PM Shyam Ranganathan wrote: > So we are through with a series of checks and tasks on release-5 (like > ensuring all backports to other branches are present in 5, upgrade > testing, basic performance testing, Package testing, etc.), but still > need the following

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-11 Thread Krutika Dhananjay
On Wed, Oct 10, 2018 at 8:30 PM Shyam Ranganathan wrote: > The following options were added post 4.1 and are part of 5.0 as the > first release for the same. They were added in as part of bugs, and > hence looking at github issues to track them as enhancements did not > catch the same. > > We

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Krutika Dhananjay
consistently reproducible. I am still debugging this to see which >> patch caused this. >> >> regards, >> Nithya >> >> >> On 2 August 2018 at 07:13, Atin Mukherjee >> wrote: >> >>> >>> >>> On Thu, 2 Aug 2018 at 07:05, Susant Pal

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Krutika Dhananjay
Same here - https://build.gluster.org/job/centos7-regression/2024/console -Krutika On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee wrote: > tests/bugs/distribute/bug-1122443.t fails my set up (3 out of 5 times) > running with master branch. As per my knowledge I've not seen this test > failing

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Krutika Dhananjay
We do already have a way to get inodelk and entrylk count from a bunch of fops, introduced in http://review.gluster.org/10880. Can you check if you can make use of this feature? -Krutika On Wed, Jun 20, 2018 at 9:17 AM, Amar Tumballi wrote: > > > On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra
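A rough sketch of how a consumer might use the feature referenced above: request the counts through a lookup's xdata so that features/locks fills them into the response. The key strings and my_lookup_cbk below are assumptions, to be verified against http://review.gluster.org/10880:

    /* Inside an xlator (xlator.h, dict.h etc. already included). */
    static int
    wind_lookup_with_lock_counts(call_frame_t *frame, xlator_t *this, loc_t *loc)
    {
        dict_t *xdata = dict_new();

        if (!xdata)
            return -1;

        /* The value is a dummy; the presence of the key is the request. */
        dict_set_int32(xdata, "glusterfs.inodelk-count", 0);
        dict_set_int32(xdata, "glusterfs.entrylk-count", 0);

        /* In my_lookup_cbk (placeholder), read the same keys back from the
         * response xdata to obtain the inodelk/entrylk counts. */
        STACK_WIND(frame, my_lookup_cbk, FIRST_CHILD(this),
                   FIRST_CHILD(this)->fops->lookup, loc, xdata);

        dict_unref(xdata);
        return 0;
    }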

Re: [Gluster-devel] 3.12 Review Request

2017-07-24 Thread Krutika Dhananjay
On Mon, Jul 24, 2017 at 3:54 PM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi, > > Following patches are targeted for 3.12. It has undergone few reviews and > yet it > to merged. Please take some time to review and merge if it looks good. > >

[Gluster-devel] Regarding positioning of nl-cache in gluster client stack

2017-07-17 Thread Krutika Dhananjay
Hi Poornima and Pranith, I see that currently glusterd loads nl-cache between stat-prefetch and open-behind on the client stack. Were there any specific considerations for selecting this position for nl-cache? I was interested to see the performance impact of loading this translator between

Re: [Gluster-devel] upstream regression suite is broken

2017-07-07 Thread Krutika Dhananjay
The patch[1] that introduced tests/basic/stats-dump.t was merged in October 2015 and my patch underwent (and passed too![2]) centos regression tests, including stats-dump.t on 05 June, 2017. The only change that the test script underwent during this time was this line in 2016, which is harmless:

Re: [Gluster-devel] [Questions] why shard feature doesn't support quota

2017-06-29 Thread Krutika Dhananjay
This is based on my high-level understanding of quota in gluster - supporting quota on sharded volumes will require us to make quota xlator account for shards residing under the hidden ".shard" directory per file and adding this to the quota-xattr representing aggregated consumed size on the

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-20 Thread Krutika Dhananjay
Apologies. Pressed 'send' even before I was done. On Tue, Jun 20, 2017 at 11:39 AM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Some update on this topic: > > I ran fio again, this time with Raghavendra's epoll-rearm patch @ > https://review.gluster.org/17391 > > Th

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-20 Thread Krutika Dhananjay
throughput here is > far too low for that to be the case. > > -- Manoj > > > On Thu, Jun 8, 2017 at 6:37 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > >> Indeed the latency on the client side dropped with iodepth=1. :) >> I ran the test twice and the r

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Krutika Dhananjay
from the client-side, > that would strengthen the hypothesis than serialization/contention among > concurrent requests at the n/w layers is the root cause here. > > -- Manoj > > > On Thu, Jun 8, 2017 at 11:46 AM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: >

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Krutika Dhananjay
, Raghavendra G <raghaven...@gluster.com> wrote: > > > On Wed, Jun 7, 2017 at 11:59 AM, Xavier Hernandez <xhernan...@datalab.es> > wrote: > >> Hi Krutika, >> >> On 06/06/17 13:35, Krutika Dhananjay wrote: >> >>> Hi, >>> >>>

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Krutika Dhananjay
bandwidth? How much of the network > bandwidth is in use while the test is being run? Wondering if there is > saturation in the network layer. > > -Vijay > > On Tue, Jun 6, 2017 at 7:35 AM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > >> Hi, >> >>

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Krutika Dhananjay
/fio_6 filename=/perf7/iotest/fio_7 filename=/perf8/iotest/fio_8 I have 3 vms reading from one mount, and each of these vms is running the above job in parallel. -Krutika On Tue, Jun 6, 2017 at 9:14 PM, Manoj Pillai <mpil...@redhat.com> wrote: > > > On Tue, Jun 6, 2017 at 5

[Gluster-devel] Performance experiments with io-stats translator

2017-06-06 Thread Krutika Dhananjay
Hi, As part of identifying performance bottlenecks within gluster stack for VM image store use-case, I loaded io-stats at multiple points on the client and brick stack and ran randrd test using fio from within the hosted vms in parallel. Before I get to the results, a little bit about the

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-31 Thread Krutika Dhananjay
back. -Krutika On Wed, May 31, 2017 at 3:08 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > > > On Tue, May 30, 2017 at 6:42 PM, Shyam <srang...@redhat.com> wrote: > >> On 05/30/2017 05:28 AM, Krutika Dhananjay wrote: >> >>> You're rig

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-31 Thread Krutika Dhananjay
On Tue, May 30, 2017 at 6:42 PM, Shyam <srang...@redhat.com> wrote: > On 05/30/2017 05:28 AM, Krutika Dhananjay wrote: > >> You're right. With brick graphs, this will be a problem. >> >> Couple of options: >> >> 1. To begin with we identify points where

Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-30 Thread Krutika Dhananjay
Xavi and I would like to propose transaction framework for 4.0 as a stretch goal. -Krutika On Tue, May 16, 2017 at 6:16 AM, Shyam wrote: > Hi, > > Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been > quite a few discussions around this in various

Re: [Gluster-devel] Volgen support for loading trace and io-stats translators at specific points in the graph

2017-05-30 Thread Krutika Dhananjay
-profile start command for the brief period where they want to capture the stats and disable it as soon as they're done. Let me know what you think. -Krutika On Fri, May 26, 2017 at 9:19 PM, Shyam <srang...@redhat.com> wrote: > On 05/26/2017 05:44 AM, Krutika Dhananjay wrote

Re: [Gluster-devel] Enabling shard on EC

2017-05-05 Thread Krutika Dhananjay
Hi, Work is in progress for this (and going a bit slow at the moment because of other priorities). At the moment we support sharding only for VM image store use-case - most common large file + single writer use case we know of. Just curious, what is the use case where you want to use shard+EC?

[Gluster-devel] tests/bitrot/br-state-check.t dumping core on NetBSD

2017-04-10 Thread Krutika Dhananjay
Hi Kotresh, The test tests/bitrot/br-state-check.t is dumping core consistently in release-3.8 branch in NetBSD on a patch in sharding, which is never enabled throughout the test. Here are the two failed runs: https://build.gluster.org/job/netbsd7-regression/3506/consoleFull

Re: [Gluster-devel] ./tests/bitrot/bug-1373520.t failure on master

2017-02-28 Thread Krutika Dhananjay
Found the bug. Please see https://review.gluster.org/#/c/16462/5/xlators/storage/posix/src/posix-handle.c@977 Will be posting the fix in some time. -Krutika On Tue, Feb 28, 2017 at 5:45 PM, Atin Mukherjee wrote: > > > On Tue, Feb 28, 2017 at 5:10 PM, Atin Mukherjee

Re: [Gluster-devel] [POC] disaster recovery: reconstruct all shards

2017-02-26 Thread Krutika Dhananjay
It should be possible to write a script that stitches the different pieces of a single file together (although with a few caveats). -Krutika On Sun, Feb 26, 2017 at 8:52 PM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Would be possible to add a command to use in case of
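A hypothetical sketch of such a stitching script, built on assumptions rather than any official tool: pieces are read off a single healthy brick, block 0 is the base file at its original path, blocks 1..N-1 sit under .shard/ named "<gfid>.<block>", and every block except possibly the last is one shard-block-size long. The caveats hinted at above (sparse or missing shards, picking the right brick, the file's true size) are only crudely handled here:

    /* stitch_shards.c -- illustrative only, not an official recovery tool. */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    #define SHARD_BLOCK_SIZE (4ULL * 1024 * 1024)  /* assumed 4MB shard size */

    static void append_piece(FILE *out, const char *path, unsigned long long off)
    {
        FILE *in = fopen(path, "rb");
        char buf[64 * 1024];
        size_t n;

        if (!in)   /* missing piece: leave the corresponding range as a hole */
            return;

        fseeko(out, (off_t)off, SEEK_SET);
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            fwrite(buf, 1, n, out);
        fclose(in);
    }

    int main(int argc, char **argv)
    {
        char path[4096];
        int i, nblocks;

        if (argc != 6) {
            fprintf(stderr, "usage: %s <brick-root> <base-file-rel-path> "
                    "<gfid> <num-blocks> <output-file>\n", argv[0]);
            return 1;
        }

        nblocks = atoi(argv[4]);
        FILE *out = fopen(argv[5], "wb");
        if (!out) {
            perror("fopen");
            return 1;
        }

        snprintf(path, sizeof(path), "%s/%s", argv[1], argv[2]);
        append_piece(out, path, 0);   /* block 0 is the base file itself */

        for (i = 1; i < nblocks; i++) {
            snprintf(path, sizeof(path), "%s/.shard/%s.%d", argv[1], argv[3], i);
            append_piece(out, path, (unsigned long long)i * SHARD_BLOCK_SIZE);
        }

        fclose(out);
        return 0;
    }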

Re: [Gluster-devel] [Gluster-users] Self heal files

2017-02-14 Thread Krutika Dhananjay
And for replicate-1, it would be priv->pending_key[0] = "trusted.afr.dis-rep-client-3" priv->pending_key[1] = "trusted.afr.dis-rep-client-4" priv->pending_key[2] = "trusted.afr.dis-rep-client-5" HTH, Krutika On Tue, Feb 14, 2017 at 4:58 PM, jayakrishnan mm

Re: [Gluster-devel] [Gluster-users] Self heal files

2017-02-14 Thread Krutika Dhananjay
On Tue, Feb 14, 2017 at 1:01 PM, jayakrishnan mm <jayakrishnan...@gmail.com> wrote: > > > > > > On Mon, Feb 13, 2017 at 7:07 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > >> Hi JK, >> >> On Mon, Feb 13, 2017 at 1:06 PM, jayakrishn

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2017-01-05 Thread Krutika Dhananjay
his issue, do you have any new update? > > Cw > > On Fri, Dec 23, 2016 at 1:05 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > > Perfect. That's what I needed to know. Thanks! :) > > > > -Krutika > > > > On Fri, Dec 23, 2016 at 7:15 AM, qingwe

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-22 Thread Krutika Dhananjay
el_init (old=0x7f5d6c003d10) at ../../../../libglusterfs/src/ > list.h:87 > 87old->prev->next = old->next; > (gdb) select-frame 3 > (gdb) print local->fop > $1 = GF_FOP_WRITE > (gdb) > > Hopefully is useful for your investigation. > > Thanks. >

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-21 Thread Krutika Dhananjay
PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Thanks for this. The information seems sufficient at the moment. > Will get back to you on this if/when I find something. > > -Krutika > > On Mon, Dec 19, 2016 at 1:44 PM, qingwei wei <tcheng...@gmail.com> wr

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-21 Thread Krutika Dhananjay
run give me > problem. Hope this information helps. > > Regards, > > Cw > > On Thu, Dec 15, 2016 at 8:02 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > > Good that you asked. I'll try but be warned this will involve me coming > back > > to you with lo

Re: [Gluster-devel] Regarding a consistent gfapi .t failure

2016-12-15 Thread Krutika Dhananjay
t; > Adding Niels, if he wants to take a look at this. > > > > We can mark the test bad by raising a Bz i suppose. > > > > Thanks, > > Poornima > > > > From: "Krutika Dhananjay" <kdhan...@redhat.com> > > To: "Poornima Gurusiddaiah"

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-15 Thread Krutika Dhananjay
ine libaio -directory /mnt/testSF-HDD1 > > -fallocate none -direct 1 -filesize 400g -nrfiles 1 -openfiles 1 -bs > > 8k -numjobs 1 -iodepth 2 -name test -rw randwrite > > > > The error (Sometimes segmentation fault) only happen during random write. > > > > The glust

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-12 Thread Krutika Dhananjay
Cw > > On Mon, Dec 12, 2016 at 2:27 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > > Hi, > > > > First of all, apologies for the late reply. Couldn't find time to look > into > > this > > until now. > > > > Changing SHARD_MAX_INODES

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-11 Thread Krutika Dhananjay
Hi, First of all, apologies for the late reply. Couldn't find time to look into this until now. Changing SHARD_MAX_INODES value from 12384 to 16 is a cool trick! Let me try that as well and get back to you in some time. -Krutika On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei
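For context, SHARD_MAX_INODES caps the size of shard's inode LRU list, so shrinking it drastically forces eviction (and any bug hiding in that code path) to fire after only a handful of shards are touched. A sketch of the kind of throwaway change being discussed, assuming the constant is defined in shard.h; check the shipped default in your branch before changing it:

    /* xlators/features/shard/src/shard.h -- illustrative only */
    #define SHARD_MAX_INODES 16   /* lowered from the shipped default so the
                                   * shard inode LRU list overflows, and
                                   * eviction is exercised, almost at once */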

Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-11-07 Thread Krutika Dhananjay
On Wed, Nov 2, 2016 at 7:00 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Just finished testing VM storage use-case. > > *Volume configuration used:* > > [root@srv-1 ~]# gluster volume info > > Volume Name: rep > Type: Replicate > Volume ID: 2c603783-c1da-49b7-81

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-27 Thread Krutika Dhananjay
file) could take some time on this new brick. So will it be possible > that some random read IO to yet created shard trigger the similar > error? > > Thanks. > > Cwtan > > On Thu, Oct 27, 2016 at 4:26 PM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > > Found th

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-27 Thread Krutika Dhananjay
case executing the replace-brick/reset-brick commands should be sufficient to recover all contents from the remaining two replicas. -Krutika On Thu, Oct 27, 2016 at 12:49 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Now it's reproducible, thanks. :) > > I think I know the RC.

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-27 Thread Krutika Dhananjay
2016-10-27 04:34:46.599807] D [logging.c:1954:_gf_msg_internal] > 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. > About to flush least recently used log message to disk > The message "D [MSGID: 0] [io-threads.c:351:iot_schedule] > 0-testHeal4-io-threads: LO

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-26 Thread Krutika Dhananjay
c > (Input/output error) > [2016-10-25 03:22:01.220025] I [MSGID: 100011] > [glusterfsd.c:1323:reincarnate] 0-glusterfsd: Fetching the volume file > from server... > [2016-10-25 03:22:01.220938] I > [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in > volfile, c

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-25 Thread Krutika Dhananjay
Tried it locally on my setup. Worked fine. Could you please attach the mount logs? -Krutika On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > +Krutika > > On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei wrote: > >> Hi, >> >> I am

Re: [Gluster-devel] Physical HDD unplug test

2016-08-23 Thread Krutika Dhananjay
ror. It seems like there is some changes on the client side that fix > this error. Please note that, for this test, i still stick to 3.7.10 > server. Will it be an issue with client and server using different > version? > > Cw > > On Tue, Aug 16, 2016 at 5:46 PM, Krutika Dhananjay <kd

Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-22 Thread Krutika Dhananjay
Here's one from me: Sharding in GlusterFS - Past, Present and Future I intend to cover the following in this talk: * What sharding is, what are its benefits over striping and in general.. * Current design * Use cases - VM image store/HC/ROBO * Challenges - atomicity, synchronization across

Re: [Gluster-devel] Physical HDD unplug test

2016-08-16 Thread Krutika Dhananjay
3.7.11 had quite a few bugs in afr and sharding+afr interop that were fixed in 3.7.12. Some of them were about files being reported as being in split-brain. Chances are that some of them existed in 3.7.10 as well - which is what you're using. Do you mind trying the same test with 3.7.12 or a

Re: [Gluster-devel] Deprecation of striped volumes as 'feature' for 3.9

2016-08-12 Thread Krutika Dhananjay
So sharding in its current form should still work fine in the non-VM-store use-cases as long as they are single-writer-to-large-file workloads. I am starting (actually resuming :)) work on making it useful for multiple-writers workload. Sharding will need to use locks to synchronize modification

Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-08-02 Thread Krutika Dhananjay
d make > sure it doesn't have any freeze or locking issues then will roll this out > to working cluster. > > > > *David Gossage* > *Carousel Checks Inc. | System Administrator* > *Office* 708.613.2284 > > On Wed, Jul 27, 2016 at 8:37 AM, David Gossage < > dgos

Re: [Gluster-devel] I missed a patch in earlier releases of 3.7.x which is breaking virt usecase

2016-07-30 Thread Krutika Dhananjay
http://review.gluster.org/15041 is ready to be merged. Request someone with merge access to please merge it. -Krutika On Sat, Jul 30, 2016 at 7:36 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > When are you tagging? It would be nice to get > http://review.gluster.org/15041 also in.

Re: [Gluster-devel] ./tests/basic/afr/granular-esh/add-brick.t spurious failure

2016-07-28 Thread Krutika Dhananjay
ave for a day to try and recreate the issue and then to debug further if it fails? -Krutika On Tue, Jul 26, 2016 at 3:39 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > I'll take a look. > > -Krutika > > On Tue, Jul 26, 2016 at 3:36 PM, Kotresh Hiremath Ravishankar < &

Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-07-26 Thread Krutika Dhananjay
Yes please, could you file a bug against glusterfs for this issue? -Krutika On Wed, Jul 27, 2016 at 1:39 AM, David Gossage wrote: > Has a bug report been filed for this issue or should l I create one with > the logs and results provided so far? > > *David Gossage*

Re: [Gluster-devel] ./tests/basic/afr/granular-esh/add-brick.t spurious failure

2016-07-26 Thread Krutika Dhananjay
I'll take a look. -Krutika On Tue, Jul 26, 2016 at 3:36 PM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi, > > Above mentioned AFR test has failed and is not related to the below patch. > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/22485/consoleFull >

Re: [Gluster-devel] How to solve the FSYNC() ERR

2016-07-11 Thread Krutika Dhananjay
-1 (Invalid argument) >> [2016-07-10 11:42:54.583156] W >> [client3_1-fops.c:876:client3_1_writev_cbk] 0-test-client-1: remote >> operation failed: Invalid argument >> [2016-07-10 11:42:54.583762] W [fuse-bridge.c:968:fuse_err_cbk] >> 0-glusterfs-fuse: 12398317: FSYNC

Re: [Gluster-devel] How to solve the FSYNC() ERR

2016-07-10 Thread Krutika Dhananjay
To me it looks like a case of a flush triggering a write() that was cached by write-behind and because the write buffer did not meet the page alignment requirement with o-direct write, it was failed with EINVAL and the trigger fop - i.e., flush() was failed with the 'Invalid argument' error code.
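To illustrate the alignment constraint being described, this is generic Linux/POSIX O_DIRECT behaviour rather than gluster code: a write whose user buffer or length does not meet the alignment requirement fails with EINVAL, the same 'Invalid argument' that write-behind ends up reporting against the flush:

    #define _GNU_SOURCE            /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile", O_CREAT | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        char unaligned[512];
        memset(unaligned, 'a', sizeof(unaligned));
        /* A stack buffer is typically not 512B/4KB aligned: expect EINVAL. */
        if (write(fd, unaligned, sizeof(unaligned)) < 0)
            perror("unaligned write");

        void *aligned = NULL;
        if (posix_memalign(&aligned, 4096, 4096) == 0) {
            memset(aligned, 'a', 4096);
            /* Aligned buffer and length: this write is expected to succeed. */
            if (write(fd, aligned, 4096) < 0)
                perror("aligned write");
            free(aligned);
        }

        close(fd);
        return 0;
    }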

Re: [Gluster-devel] tarissue.t spurious failure

2016-05-19 Thread Krutika Dhananjay
Also, I must add that I ran it in a loop on my laptop for about 4 hours and it ran without any failure. -Krutika On Thu, May 19, 2016 at 4:42 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > tests/basic/afr/tarissue.t fails sometimes on jenkins centos slave(s). &g

[Gluster-devel] tarissue.t spurious failure

2016-05-19 Thread Krutika Dhananjay
tests/basic/afr/tarissue.t fails sometimes on jenkins centos slave(s). https://build.gluster.org/job/rackspace-regression-2GB-triggered/20915/consoleFull -Krutika ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-08 Thread Krutika Dhananjay
----- > > From: "Krutika Dhananjay" <kdhan...@redhat.com> > > To: "Pranith Karampuri" <pkara...@redhat.com> > > Cc: "gluster-infra" <gluster-in...@gluster.org>, "Gluster Devel" < > gluster-devel@gluster.org>, "

Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-07 Thread Krutika Dhananjay
It has been failing rather frequently. Have reported a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1315560 For now, have moved it to bad tests here: http://review.gluster.org/#/c/13632/1 -Krutika On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: >

[Gluster-devel] What's the correct way to enable direct-IO?

2016-02-24 Thread Krutika Dhananjay
Hi, git-grep tells me there are multiple options in our code base for enabling direct-IO on a gluster volume, at several layers in the translator stack: i) use the mount option 'direct-io-mode=enable' ii) enable 'network.remote-dio' which is a protocol/client option using volume set command

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-23 Thread Krutika Dhananjay
Raghavendra, The crash was due to bug(s) in clear-locks command implementation. Joe and I had had an offline discussion about this. -Krutika - Original Message - > From: "Raghavendra G" <raghaven...@gluster.com> > To: "Krutika Dhananjay" <kdhan..

Re: [Gluster-devel] 3.6.8 crashing a lot in production

2016-02-12 Thread Krutika Dhananjay
Taking a look. Give me some time. -Krutika - Original Message - > From: "Joe Julian" <j...@julianfamily.org> > To: "Krutika Dhananjay" <kdhan...@redhat.com>, gluster-devel@gluster.org > Sent: Saturday, February 13, 2016 6:02:13 AM > Subjec

Re: [Gluster-devel] Few details needed about *any* recent or upcoming feature

2016-01-20 Thread Krutika Dhananjay
sharding: http://blog.gluster.org/2015/12/introducing-shard-translator/ http://blog.gluster.org/2015/12/sharding-what-next-2/ -Krutika - Original Message - > From: "Niels de Vos" > To: gluster-devel@gluster.org > Sent: Wednesday, January 20, 2016 4:11:42 PM >

Re: [Gluster-devel] NetBSD tests not running to completion.

2016-01-07 Thread Krutika Dhananjay
- Original Message - > From: "Pranith Kumar Karampuri" > To: "Emmanuel Dreyfus" , "Ravishankar N" > > Cc: "Gluster Devel" , "gluster-infra" > > Sent: Friday, January 8,

Re: [Gluster-devel] Writing a new xlator is made a bit easier now

2015-12-23 Thread Krutika Dhananjay
Cool stuff! :) -Krutika - Original Message - > From: "Poornima Gurusiddaiah" > To: "Gluster Devel" > Sent: Thursday, December 24, 2015 11:32:15 AM > Subject: [Gluster-devel] Writing a new xlator is made a bit easier now > Hi All, >

Re: [Gluster-devel] Sharding - what next?

2015-12-16 Thread Krutika Dhananjay
- Original Message - > From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com> > To: "Krutika Dhananjay" <kdhan...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "gluster-users" > <gluster-us...@g

Re: [Gluster-devel] Sharding - what next?

2015-12-09 Thread Krutika Dhananjay
- Original Message - > From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com> > To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Gluster Devel" > <gluster-devel@gluster.org>, "gluster-users" <gluster-us...@gluster.org&g

[Gluster-devel] Sharding - what next?

2015-12-03 Thread Krutika Dhananjay
Hi, When we designed and wrote sharding feature in GlusterFS, our focus had been single-writer-to-large-files use cases, chief among these being the virtual machine image store use-case. Sharding, for the uninitiated, is a feature that was introduced in glusterfs-3.7.0 release with

Re: [Gluster-devel] [Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-22 Thread Krutika Dhananjay
Could you share the output of 'gluster volume info', and also information as to which node went down on reboot? -Krutika - Original Message - > From: "André Bauer" > To: "gluster-users" > Cc: gluster-devel@gluster.org > Sent: Friday,

[Gluster-devel] Regarding message ids for quota

2015-09-23 Thread Krutika Dhananjay
Hi, While working on http://review.gluster.org/#/c/12217/ , I figured that the message ids in the doxygen-friendly comments in quota-messages.h are supposed to start from 12 whereas they seem to start from 11. This would cause these to conflict with the message ids for upcall xlator.

Re: [Gluster-devel] Spurious failures

2015-09-22 Thread Krutika Dhananjay
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull Ctrl + f 'not ok'. -Krutika - Original Message - > From: "Atin Mukherjee" <amukh...@redhat.com> > To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Gluste

Re: [Gluster-devel] Spurious failures

2015-09-22 Thread Krutika Dhananjay
kh...@redhat.com> > To: "Krutika Dhananjay" <kdhan...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Gaurav Garg" > <gg...@redhat.com>, "Aravinda" <avish...@redhat.com>, "Kotresh Hiremath > Ravis

[Gluster-devel] REMINDER: Weekly gluster community meeting to start in ~2 hours

2015-09-16 Thread Krutika Dhananjay
Hi All, In about 2 hours from now we will have the regular weekly Gluster Community meeting. Meeting details: - location: #gluster-meeting on Freenode IRC - date: every Wednesday - time: 12:00 UTC, 14:00 CEST, 17:30 IST (in your terminal, run: date -d "12:00 UTC") - agenda:

[Gluster-devel] Jenkins NetBSD slaves hung?

2015-09-11 Thread Krutika Dhananjay
Hi, It looks like the NetBSD slaves 75, 77 and 7j are hung? And since these are the only slaves online, there seems to be no progress wrt the execution of jobs lined up in the queue. -Krutika ___ Gluster-devel mailing list

[Gluster-devel] NetBSD spurious failure: tests/basic/mount-nfs-auth.t

2015-09-11 Thread Krutika Dhananjay
Hi, tests/basic/mount-nfs-auth.t has failed a couple of times in the recent past on NetBSD (release-3.7). Here's one of those instances: https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/10196/consoleFull -Krutika ___

Re: [Gluster-devel] Gluster Sharding and Geo-replication

2015-09-03 Thread Krutika Dhananjay
- Original Message - > From: "Venky Shankar" <vshan...@redhat.com> > To: "Aravinda" <avish...@redhat.com> > Cc: "Shyam" <srang...@redhat.com>, "Krutika Dhananjay" <kdhan...@redhat.com>, > "Gluster Devel"

Re: [Gluster-devel] Gluster Sharding and Geo-replication

2015-09-03 Thread Krutika Dhananjay
- Original Message - > From: "Sahina Bose" <sab...@redhat.com> > To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Shyam" <srang...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org> > Sent: Thursday,

[Gluster-devel] REMINDER: Weekly gluster community meeting to start in ~2 hours

2015-09-02 Thread Krutika Dhananjay
Hi All, In about 2 hours from now we will have the regular weekly Gluster Community meeting. Meeting details: - location: #gluster-meeting on Freenode IRC - date: every Wednesday - time: 12:00 UTC, 14:00 CEST, 17:30 IST (in your terminal, run: date -d "12:00 UTC") - agenda:

[Gluster-devel] Minutes of the Weekly Gluster Community Meeting held on 2nd September, 2015

2015-09-02 Thread Krutika Dhananjay
Minutes: Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.html Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.txt Log:

Re: [Gluster-devel] Gluster Sharding and Geo-replication

2015-09-02 Thread Krutika Dhananjay
- Original Message - > From: "Shyam" > To: "Aravinda" , "Gluster Devel" > > Sent: Wednesday, September 2, 2015 8:09:55 PM > Subject: Re: [Gluster-devel] Gluster Sharding and Geo-replication > On 09/02/2015 03:12 AM,

Re: [Gluster-devel] Inconsistent behavior due to lack of lookup on entry followed by readdirp

2015-08-13 Thread Krutika Dhananjay
- Original Message - From: Raghavendra Gowdappa rgowd...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Mohammed Rafi K C rkavu...@redhat.com, Gluster Devel gluster-devel@gluster.org, Dan Lambright dlamb...@redhat.com, Nithya Balachandran nbala...@redhat.com, Ben Turner

Re: [Gluster-devel] Inconsistent behavior due to lack of lookup on entry followed by readdirp

2015-08-12 Thread Krutika Dhananjay
I faced the same issue with the sharding translator. I fixed it by making its readdirp callback initialize individual entries' inode ctx, some of these being xattr values, which are filled in entry-dict by the posix translator. Here is the patch that got merged recently:
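The archived snippet is truncated before the patch link. Independent of that patch, the general pattern described above (seed each entry's inode ctx in the readdirp callback from the xattrs that posix packs into entry->dict) looks roughly like the sketch below; my_ctx_set and the xattr key are placeholders, not shard's actual names:

    /* Inside an xlator's .c file (xlator.h, dict.h etc. already included). */
    int
    my_readdirp_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                    int32_t op_ret, int32_t op_errno, gf_dirent_t *entries,
                    dict_t *xdata)
    {
        gf_dirent_t *entry = NULL;
        uint64_t size = 0;

        if (op_ret <= 0)
            goto unwind;

        list_for_each_entry(entry, &entries->list, list) {
            if (!entry->inode || !entry->dict)
                continue;
            /* posix fills requested xattrs into entry->dict during readdirp */
            if (dict_get_uint64(entry->dict, "trusted.my.xattr", &size) == 0)
                my_ctx_set(entry->inode, this, size);  /* placeholder helper */
        }

    unwind:
        STACK_UNWIND_STRICT(readdirp, frame, op_ret, op_errno, entries, xdata);
        return 0;
    }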

Re: [Gluster-devel] Inconsistent behavior due to lack of lookup on entry followed by readdirp

2015-08-12 Thread Krutika Dhananjay
- Original Message - From: Raghavendra Gowdappa rgowd...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Mohammed Rafi K C rkavu...@redhat.com, Gluster Devel gluster-devel@gluster.org, Dan Lambright dlamb...@redhat.com, Nithya Balachandran nbala...@redhat.com, Ben Turner

[Gluster-devel] Minutes of the Weekly Gluster Community Meeting - 5th August 2015

2015-08-05 Thread Krutika Dhananjay
The minutes of the weekly community meeting held today can be found at: Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.html Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.txt

[Gluster-devel] NetBSD spurious failures

2015-07-24 Thread Krutika Dhananjay
Hi, The following tests seem to be failing fairly consistently on NetBSD: tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t - http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8661/consoleFull tests/basic/tier/tier_lookup_heal.t -

Re: [Gluster-devel] spurious failure with test-case ./tests/basic/tier/tier.t

2015-07-22 Thread Krutika Dhananjay
This test failed twice on my patch even after retrigger: https://build.gluster.org/job/rackspace-regression-2GB-triggered/12726/consoleFull https://build.gluster.org/job/rackspace-regression-2GB-triggered/12739/consoleFull Is this a new issue or the one that was originally reported?

Re: [Gluster-devel] tests/bugs/glusterd/bug-948686.t gave a core

2015-06-08 Thread Krutika Dhananjay
The patch @ http://review.gluster.org/#/c/9/ fixes this issue. Review(s) requested. -Krutika - Original Message - From: Krutika Dhananjay kdhan...@redhat.com To: Soumya Koduri skod...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Monday, June 8, 2015 11:38:00 AM

Re: [Gluster-devel] tests/bugs/glusterd/bug-948686.t gave a core

2015-06-04 Thread Krutika Dhananjay
...@redhat.com, Krutika Dhananjay kdhan...@redhat.com, Anuradha Talur ata...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Thursday, June 4, 2015 6:38:33 PM Subject: tests/bugs/glusterd/bug-948686.t gave a core Glustershd is crashing because afr wound xattrop with null gfid in loc

Re: [Gluster-devel] Regarding the size parameter in readdir(p) fops

2015-05-28 Thread Krutika Dhananjay
- From: Niels de Vos nde...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Tuesday, May 19, 2015 3:12:20 PM Subject: Re: [Gluster-devel] Regarding the size parameter in readdir(p) fops On Tue, May 19, 2015 at 05:12:35AM -0400, Krutika

[Gluster-devel] Spurious failure fix - entry-self-heal.t

2015-05-26 Thread Krutika Dhananjay
Pranith, I sent a patch to fix the spurious failure in entry-self-heal.t @ http://review.gluster.org/#/c/10916/ . Requesting a review on the same. -Krutika ___ Gluster-devel mailing list Gluster-devel@gluster.org

[Gluster-devel] Regarding the size parameter in readdir(p) fops

2015-05-19 Thread Krutika Dhananjay
Hi, The following patch fixes an issue with readdir(p) in shard xlator: http://review.gluster.org/#/c/10809/ whose details can be found in the commit message. One side effect of this is that from shard xlator, the size of the dirents list returned to the translators above it could be

Re: [Gluster-devel] Regression host hung on tests/basic/afr/split-brain-healing.t

2015-02-27 Thread Krutika Dhananjay
...@redhat.com, Krutika Dhananjay kdhan...@redhat.com, Anuradha Talur ata...@redhat.com Sent: Thursday, February 26, 2015 5:11:20 PM Subject: Re: Regression host hung on tests/basic/afr/split-brain-healing.t On 26 Feb 2015, at 08:57, Pranith Kumar Karampuri pkara...@redhat.com wrote: On 02/26/2015

Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures - design

2015-02-24 Thread Krutika Dhananjay
- Original Message - From: Vijay Bellur vbel...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Tuesday, February 24, 2015 12:26:58 PM Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures

Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures - design

2015-02-24 Thread Krutika Dhananjay
- Original Message - From: Vijay Bellur vbel...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Tuesday, February 24, 2015 4:13:13 PM Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures

Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures - design

2015-02-23 Thread Krutika Dhananjay
- Original Message - From: Vijay Bellur vbel...@redhat.com To: Krutika Dhananjay kdhan...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Tuesday, February 24, 2015 11:35:28 AM Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability from failures

[Gluster-devel] Regarding client-side healing in AFR-v2

2014-12-30 Thread Krutika Dhananjay
Hello, The doc @ http://www.gluster.org/community/documentation/index.php/Features/afrv2 says client-side healing is completely removed in AFR-v2. However, there IS code in afr_inode_refresh() code path that performs client-side healing in AFR-v2. Could you please explain what led to this

Re: [Gluster-devel] AFR conservative merge portability

2014-12-14 Thread Krutika Dhananjay
sends a SETATTR for spb_heal ctime/mtine, and since the other brick is down, here is our metadata split brain. In http://review.gluster.org/9267, Krutika Dhananjay fixes the test by clearing AFR xattr to remove the split brain state, but while it let the test pass, it does not address the real
