So SHARD_MIN_BLOCK_SIZE is currently hardcoded to 4MB.
If you want to reduce that value further (purely for the sake of testing,
not recommended otherwise), you will need to change its value in the source
code here -
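For reference, here is a minimal sketch of the kind of change involved (the
exact file and expression may differ across versions; the macro is assumed to
live in the shard xlator's header, e.g. xlators/features/shard/src/shard.h):
/* Current hardcoded floor on the shard block size (illustrative): */
#define SHARD_MIN_BLOCK_SIZE (4 * GF_UNIT_MB)
/* For testing only, replace it with a smaller floor, e.g.: */
/* #define SHARD_MIN_BLOCK_SIZE (64 * GF_UNIT_KB) */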
Hi,
I've proposed a solution to the problem of space running out in some
children of DHT even when its other children have free space available,
here - https://github.com/gluster/glusterfs/issues/675.
The proposal aims to solve a very specific instance of this generic class
of problems where
explain what remote-dio and strict-o-direct variables changed in
> behaviour of my Gluster? It would be great for later archive/users to
> understand what and why this solved my issue.
>
> Anyway, Thanks a LOT!!!
>
> BR,
> Martin
>
> On 13 May 2019, at 10:20, Krutika Dhananjay wr
send profiling info after some VM fails. I
> suppose this is the correct profiling strategy.
>
About this, how many vms do you need to recreate it? A single vm? Or
multiple vms doing IO in parallel?
> Thanks,
> BR!
> Martin
>
> On 13 May 2019, at 09:21, Kr
Also, what's the caching policy that qemu is using on the affected vms?
Is it cache=none? Or something else? You can get this information from the
command line of the qemu-kvm process corresponding to your vm in the ps output.
-Krutika
On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay
wrote:
> W
What version of gluster are you using?
Also, can you capture and share volume-profile output for a run where you
manage to recreate this issue?
https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
Let me know if you have any
Sent https://review.gluster.org/c/glusterfs/+/21889 to fix the original
issue.
-Krutika
On Wed, Dec 5, 2018 at 10:58 AM Atin Mukherjee wrote:
> We can't afford to keep a bad test hanging for more than a day, since that
> blocks other fixes from getting in (I see at least 4-5 more patches failed
>
On Thu, Oct 11, 2018 at 8:55 PM Shyam Ranganathan
wrote:
> So we are through with a series of checks and tasks on release-5 (like
> ensuring all backports to other branches are present in 5, upgrade
> testing, basic performance testing, Package testing, etc.), but still
> need the following
On Wed, Oct 10, 2018 at 8:30 PM Shyam Ranganathan
wrote:
> The following options were added post 4.1, with 5.0 being the first
> release to include them. They were added as part of bug fixes, and
> hence looking at GitHub issues to track them as enhancements did not
> catch them.
>
> We
consistently reproducible. I am still debugging this to see which
>> patch caused this.
>>
>> regards,
>> Nithya
>>
>>
>> On 2 August 2018 at 07:13, Atin Mukherjee
>> wrote:
>>
>>>
>>>
>>> On Thu, 2 Aug 2018 at 07:05, Susant Pal
Same here - https://build.gluster.org/job/centos7-regression/2024/console
-Krutika
On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee wrote:
> tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times)
> running with the master branch. To my knowledge I've not seen this test
> failing
We do already have a way to get inodelk and entrylk count from a bunch of
fops, introduced in http://review.gluster.org/10880.
Can you check if you can make use of this feature?
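As a rough sketch of how a consumer could use it (illustrative only, not the
exact API added by that patch; the GLUSTERFS_INODELK_COUNT and
GLUSTERFS_ENTRYLK_COUNT xdata keys are assumed from libglusterfs headers):
/* Request lock counts via xdata on a fop and read them back in the cbk. */
dict_t *xdata = dict_new();
dict_set_int32(xdata, GLUSTERFS_INODELK_COUNT, 0);
dict_set_int32(xdata, GLUSTERFS_ENTRYLK_COUNT, 0);
STACK_WIND(frame, my_lookup_cbk, FIRST_CHILD(this),
           FIRST_CHILD(this)->fops->lookup, loc, xdata);
dict_unref(xdata);
/* In my_lookup_cbk(), the locks xlator has filled in the counts: */
int32_t inodelk_count = 0, entrylk_count = 0;
if (xdata) {
        dict_get_int32(xdata, GLUSTERFS_INODELK_COUNT, &inodelk_count);
        dict_get_int32(xdata, GLUSTERFS_ENTRYLK_COUNT, &entrylk_count);
}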
-Krutika
On Wed, Jun 20, 2018 at 9:17 AM, Amar Tumballi wrote:
>
>
> On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra
On Mon, Jul 24, 2017 at 3:54 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi,
>
> The following patches are targeted for 3.12. They have undergone a few reviews
> but are yet to be merged. Please take some time to review and merge them if
> they look good.
>
>
Hi Poornima and Pranith,
I see that currently glusterd loads nl-cache between stat-prefetch and
open-behind on the client stack. Were there any specific considerations for
selecting this position for nl-cache?
I was interested to see the performance impact of loading this translator
between
The patch[1] that introduced tests/basic/stats-dump.t was merged in October
2015 and
my patch underwent (and passed too![2]) centos regression tests, including
stats-dump.t on 05 June, 2017.
The only change that the test script underwent during this time was this
line in 2016, which is harmless:
This is based on my high-level understanding of quota in gluster -
supporting quota on sharded volumes will require us to make the quota xlator
account for the shards residing under the hidden ".shard" directory per file
and add this to the quota-xattr representing the aggregated consumed size on
the
Apologies. Pressed 'send' even before I was done.
On Tue, Jun 20, 2017 at 11:39 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Some update on this topic:
>
> I ran fio again, this time with Raghavendra's epoll-rearm patch @
> https://review.gluster.org/17391
>
> Th
throughput here is
> far too low for that to be the case.
>
> -- Manoj
>
>
> On Thu, Jun 8, 2017 at 6:37 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Indeed the latency on the client side dropped with iodepth=1. :)
>> I ran the test twice and the r
from the client-side,
> that would strengthen the hypothesis that serialization/contention among
> concurrent requests at the n/w layers is the root cause here.
>
> -- Manoj
>
>
> On Thu, Jun 8, 2017 at 11:46 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
, Raghavendra G <raghaven...@gluster.com>
wrote:
>
>
> On Wed, Jun 7, 2017 at 11:59 AM, Xavier Hernandez <xhernan...@datalab.es>
> wrote:
>
>> Hi Krutika,
>>
>> On 06/06/17 13:35, Krutika Dhananjay wrote:
>>
>>> Hi,
>>>
>>>
bandwidth? How much of the network
> bandwidth is in use while the test is being run? Wondering if there is
> saturation in the network layer.
>
> -Vijay
>
> On Tue, Jun 6, 2017 at 7:35 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Hi,
>>
>>
/fio_6
filename=/perf7/iotest/fio_7
filename=/perf8/iotest/fio_8
I have 3 vms reading from one mount, and each of these vms is running the
above job in parallel.
-Krutika
On Tue, Jun 6, 2017 at 9:14 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>
>
> On Tue, Jun 6, 2017 at 5
Hi,
As part of identifying performance bottlenecks within the gluster stack for the
VM image store use-case, I loaded io-stats at multiple points on the client
and brick stacks and ran a randrd test using fio from within the hosted vms in
parallel.
Before I get to the results, a little bit about the
back.
-Krutika
On Wed, May 31, 2017 at 3:08 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
>
> On Tue, May 30, 2017 at 6:42 PM, Shyam <srang...@redhat.com> wrote:
>
>> On 05/30/2017 05:28 AM, Krutika Dhananjay wrote:
>>
>>> You're rig
On Tue, May 30, 2017 at 6:42 PM, Shyam <srang...@redhat.com> wrote:
> On 05/30/2017 05:28 AM, Krutika Dhananjay wrote:
>
>> You're right. With brick graphs, this will be a problem.
>>
>> Couple of options:
>>
>> 1. To begin with we identify points where
Xavi and I would like to propose transaction framework for 4.0 as a stretch
goal.
-Krutika
On Tue, May 16, 2017 at 6:16 AM, Shyam wrote:
> Hi,
>
> Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been
> quite a few discussions around this in various
-profile start command for the brief period where they want to
capture the stats and disable it as soon as they're done.
Let me know what you think.
-Krutika
On Fri, May 26, 2017 at 9:19 PM, Shyam <srang...@redhat.com> wrote:
> On 05/26/2017 05:44 AM, Krutika Dhananjay wrote
Hi,
Work is in progress for this (and going a bit slow at the moment because of
other priorities).
At the moment we support sharding only for the VM image store use-case - the
most common large-file, single-writer use case we know of.
Just curious, what is the use case where you want to use shard+EC?
Hi Kotresh,
The test tests/bitrot/br-state-check.t is dumping core consistently on the
release-3.8 branch on NetBSD, on a sharding patch, even though sharding is
never enabled throughout the test.
Here are the two failed runs:
https://build.gluster.org/job/netbsd7-regression/3506/consoleFull
Found the bug. Please see
https://review.gluster.org/#/c/16462/5/xlators/storage/posix/src/posix-handle.c@977
Will be posting the fix in some time.
-Krutika
On Tue, Feb 28, 2017 at 5:45 PM, Atin Mukherjee wrote:
>
>
> On Tue, Feb 28, 2017 at 5:10 PM, Atin Mukherjee
It should be possible to write a script that stitches the different pieces
of a single file together
(although with a few caveats).
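As a rough illustration of what such a script could look like (a sketch only,
not an actual tool; it assumes shards live on the brick backend as
/.shard/<gfid>.<N> with shard 0 being the base file itself, and it ignores the
caveats around sparse/missing shards and a partial last shard):
/* Illustrative shard-stitching sketch. Run against one brick of a healthy
 * replica. Compile with: cc stitch.c -o stitch -luuid */
#include <stdio.h>
#include <sys/xattr.h>
#include <uuid/uuid.h>
int main(int argc, char **argv)
{
        if (argc != 4) {
                fprintf(stderr, "usage: %s <file-on-brick> <brick-root> <output>\n",
                        argv[0]);
                return 1;
        }
        unsigned char gfid[16];
        char gfid_str[37], path[4096], buf[1 << 20];
        size_t n;
        /* The base file's GFID names its shards under /.shard on the brick. */
        if (getxattr(argv[1], "trusted.gfid", gfid, sizeof(gfid)) != 16) {
                perror("getxattr(trusted.gfid)");
                return 1;
        }
        uuid_unparse(gfid, gfid_str);
        FILE *out = fopen(argv[3], "w");
        FILE *in = fopen(argv[1], "r");     /* shard 0: the base file itself */
        if (!out || !in)
                return 1;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                fwrite(buf, 1, n, out);
        fclose(in);
        for (int i = 1; ; i++) {            /* shards 1..N under /.shard */
                snprintf(path, sizeof(path), "%s/.shard/%s.%d", argv[2],
                         gfid_str, i);
                if (!(in = fopen(path, "r")))
                        break;              /* caveat: a hole would also stop here */
                while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                        fwrite(buf, 1, n, out);
                fclose(in);
        }
        fclose(out);
        return 0;
}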
-Krutika
On Sun, Feb 26, 2017 at 8:52 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Would be possible to add a command to use in case of
And for replicate-1, it would be
priv->pending_key[0] = "trusted.afr.dis-rep-client-3"
priv->pending_key[1] = "trusted.afr.dis-rep-client-4"
priv->pending_key[2] = "trusted.afr.dis-rep-client-5"
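Just to spell out the pattern (an illustrative sketch, not the actual afr_init()
code): the keys are "trusted.afr.<volname>-client-<child-index>", with the index
continuing across replica sets, so replica set r of an n-way replica uses
indices n*r .. n*r + n - 1:
/* Illustrative only; key, volname, r and replica_count are hypothetical. */
for (int i = 0; i < replica_count; i++)
        snprintf(key[i], sizeof(key[i]), "trusted.afr.%s-client-%d",
                 volname, replica_count * r + i);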
HTH,
Krutika
On Tue, Feb 14, 2017 at 4:58 PM, jayakrishnan mm
On Tue, Feb 14, 2017 at 1:01 PM, jayakrishnan mm <jayakrishnan...@gmail.com>
wrote:
>
>
>
>
>
> On Mon, Feb 13, 2017 at 7:07 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Hi JK,
>>
>> On Mon, Feb 13, 2017 at 1:06 PM, jayakrishn
his issue, do you have any new update?
>
> Cw
>
> On Fri, Dec 23, 2016 at 1:05 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
> > Perfect. That's what I needed to know. Thanks! :)
> >
> > -Krutika
> >
> > On Fri, Dec 23, 2016 at 7:15 AM, qingwe
el_init (old=0x7f5d6c003d10) at ../../../../libglusterfs/src/
> list.h:87
> 87          old->prev->next = old->next;
> (gdb) select-frame 3
> (gdb) print local->fop
> $1 = GF_FOP_WRITE
> (gdb)
>
> Hopefully this is useful for your investigation.
>
> Thanks.
>
PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Thanks for this. The information seems sufficient at the moment.
> Will get back to you on this if/when I find something.
>
> -Krutika
>
> On Mon, Dec 19, 2016 at 1:44 PM, qingwei wei <tcheng...@gmail.com> wr
run give me
> problem. Hope this information helps.
>
> Regards,
>
> Cw
>
> On Thu, Dec 15, 2016 at 8:02 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
> > Good that you asked. I'll try but be warned this will involve me coming
> back
> > to you with lo
> > Adding Niels, if he wants to take a look at this.
> >
> > We can mark the test bad by raising a Bz i suppose.
> >
> > Thanks,
> > Poornima
> >
> > From: "Krutika Dhananjay" <kdhan...@redhat.com>
> > To: "Poornima Gurusiddaiah"
ine libaio -directory /mnt/testSF-HDD1
> > -fallocate none -direct 1 -filesize 400g -nrfiles 1 -openfiles 1 -bs
> > 8k -numjobs 1 -iodepth 2 -name test -rw randwrite
> >
> > The error (sometimes a segmentation fault) only happens during random write.
> >
> > The glust
Cw
>
> On Mon, Dec 12, 2016 at 2:27 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
> > Hi,
> >
> > First of all, apologies for the late reply. Couldn't find time to look
> into
> > this
> > until now.
> >
> > Changing SHARD_MAX_INODES
Hi,
First of all, apologies for the late reply. Couldn't find time to look into
this
until now.
Changing the SHARD_MAX_INODES value from 12384 to 16 is a cool trick!
Let me try that as well and get back to you in some time.
-Krutika
On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei
On Wed, Nov 2, 2016 at 7:00 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Just finished testing VM storage use-case.
>
> *Volume configuration used:*
>
> [root@srv-1 ~]# gluster volume info
>
> Volume Name: rep
> Type: Replicate
> Volume ID: 2c603783-c1da-49b7-81
file) could take some time on this new brick. So would it be possible
> that some random read IO to a not-yet-created shard triggers a similar
> error?
>
> Thanks.
>
> Cwtan
>
> On Thu, Oct 27, 2016 at 4:26 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
> > Found th
case executing
the replace-brick/reset-brick
commands should be sufficient to recover all contents from the remaining
two replicas.
-Krutika
On Thu, Oct 27, 2016 at 12:49 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Now it's reproducible, thanks. :)
>
> I think I know the RC.
2016-10-27 04:34:46.599807] D [logging.c:1954:_gf_msg_internal]
> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
> About to flush least recently used log message to disk
> The message "D [MSGID: 0] [io-threads.c:351:iot_schedule]
> 0-testHeal4-io-threads: LO
c
> (Input/output error)
> [2016-10-25 03:22:01.220025] I [MSGID: 100011]
> [glusterfsd.c:1323:reincarnate] 0-glusterfsd: Fetching the volume file
> from server...
> [2016-10-25 03:22:01.220938] I
> [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in
> volfile, c
Tried it locally on my setup. Worked fine.
Could you please attach the mount logs?
-Krutika
On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> +Krutika
>
> On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei wrote:
>
>> Hi,
>>
>> I am
ror. It seems like there are some changes on the client side that fix
> this error. Please note that, for this test, I still stick to the 3.7.10
> server. Will it be an issue with the client and server using different
> versions?
>
> Cw
>
> On Tue, Aug 16, 2016 at 5:46 PM, Krutika Dhananjay <kd
Here's one from me:
Sharding in GlusterFS - Past, Present and Future
I intend to cover the following in this talk:
* What sharding is, its benefits over striping and in general...
* Current design
* Use cases - VM image store/HC/ROBO
* Challenges - atomicity, synchronization across
3.7.11 had quite a few bugs in afr and sharding+afr interop that were fixed
in 3.7.12.
Some of them were about files being reported as being in split-brain.
Chances are that some of them existed in 3.7.10 as well - which is what
you're using.
Do you mind trying the same test with 3.7.12 or a
So sharding in its current form should still work fine in the non-VM-store
use-cases
as long as they are single-writer-to-large-file workloads.
I am starting (actually resuming :)) work on making it useful for
multiple-writer workloads.
Sharding will need to use locks to synchronize modification
d make
> sure it doesn't have any freeze or locking issues then will roll this out
> to working cluster.
>
>
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Wed, Jul 27, 2016 at 8:37 AM, David Gossage <
> dgos
http://review.gluster.org/15041 is ready to be merged.
Request someone with merge access to please merge it.
-Krutika
On Sat, Jul 30, 2016 at 7:36 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> When are you tagging? It would be nice to get
> http://review.gluster.org/15041 also in.
ave for a day to try
and recreate the issue and then to debug further if it fails?
-Krutika
On Tue, Jul 26, 2016 at 3:39 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> I'll take a look.
>
> -Krutika
>
> On Tue, Jul 26, 2016 at 3:36 PM, Kotresh Hiremath Ravishankar <
&
Yes please, could you file a bug against glusterfs for this issue?
-Krutika
On Wed, Jul 27, 2016 at 1:39 AM, David Gossage
wrote:
> Has a bug report been filed for this issue or should l I create one with
> the logs and results provided so far?
>
> *David Gossage*
I'll take a look.
-Krutika
On Tue, Jul 26, 2016 at 3:36 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi,
>
> The above-mentioned AFR test has failed and is not related to the below patch.
>
>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22485/consoleFull
>
-1 (Invalid argument)
>> [2016-07-10 11:42:54.583156] W
>> [client3_1-fops.c:876:client3_1_writev_cbk] 0-test-client-1: remote
>> operation failed: Invalid argument
>> [2016-07-10 11:42:54.583762] W [fuse-bridge.c:968:fuse_err_cbk]
>> 0-glusterfs-fuse: 12398317: FSYNC
To me it looks like a case of a flush triggering a write() that was cached
by write-behind; because the write buffer did not meet the alignment
requirement for an O_DIRECT write, it failed with EINVAL, and the triggering
fop - i.e., flush() - was therefore failed with the 'Invalid argument' error
code.
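For reference, the alignment rule at play here is generic O_DIRECT behaviour,
not gluster-specific code; a minimal sketch of the constraint, assuming a
512-byte logical block size:
/* With O_DIRECT, buffer address, file offset and length must typically all be
 * multiples of the logical block size, or the write fails with EINVAL. */
#include <stdbool.h>
#include <stdint.h>
#include <sys/types.h>
#define BLK_ALIGN 512
static bool odirect_aligned(const void *buf, off_t off, size_t len)
{
        return ((uintptr_t)buf % BLK_ALIGN == 0) && (off % BLK_ALIGN == 0) &&
               (len % BLK_ALIGN == 0);
}
/* A cached, unaligned write later flushed by write-behind fails this check at
 * the brick, returns EINVAL, and the error surfaces on the fop (here, flush)
 * that triggered the flush of the cached write. */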
Also, I must add that I ran it in a loop on my laptop for about 4 hours and
it ran without any failure.
-Krutika
On Thu, May 19, 2016 at 4:42 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> tests/basic/afr/tarissue.t fails sometimes on jenkins centos slave(s).
&g
tests/basic/afr/tarissue.t fails sometimes on jenkins centos slave(s).
https://build.gluster.org/job/rackspace-regression-2GB-triggered/20915/consoleFull
-Krutika
-----
> > From: "Krutika Dhananjay" <kdhan...@redhat.com>
> > To: "Pranith Karampuri" <pkara...@redhat.com>
> > Cc: "gluster-infra" <gluster-in...@gluster.org>, "Gluster Devel" <
> gluster-devel@gluster.org>, "
It has been failing rather frequently.
Have reported a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1315560
For now, have moved it to bad tests here:
http://review.gluster.org/#/c/13632/1
-Krutika
On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
Hi,
git-grep tells me there are multiple options in our code base for enabling
direct-IO on a gluster volume, at several layers in the translator stack:
i) use the mount option 'direct-io-mode=enable'
ii) enable 'network.remote-dio', which is a protocol/client option enabled via
the volume set command
Raghavendra,
The crash was due to bug(s) in clear-locks command implementation. Joe and I
had had an offline discussion about this.
-Krutika
- Original Message -
> From: "Raghavendra G" <raghaven...@gluster.com>
> To: "Krutika Dhananjay" <kdhan..
Taking a look. Give me some time.
-Krutika
- Original Message -
> From: "Joe Julian" <j...@julianfamily.org>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>, gluster-devel@gluster.org
> Sent: Saturday, February 13, 2016 6:02:13 AM
> Subjec
sharding:
http://blog.gluster.org/2015/12/introducing-shard-translator/
http://blog.gluster.org/2015/12/sharding-what-next-2/
-Krutika
- Original Message -
> From: "Niels de Vos"
> To: gluster-devel@gluster.org
> Sent: Wednesday, January 20, 2016 4:11:42 PM
>
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Emmanuel Dreyfus" , "Ravishankar N"
>
> Cc: "Gluster Devel" , "gluster-infra"
>
> Sent: Friday, January 8,
Cool stuff! :)
-Krutika
- Original Message -
> From: "Poornima Gurusiddaiah"
> To: "Gluster Devel"
> Sent: Thursday, December 24, 2015 11:32:15 AM
> Subject: [Gluster-devel] Writing a new xlator is made a bit easier now
> Hi All,
>
- Original Message -
> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "gluster-users"
> <gluster-us...@g
- Original Message -
> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Gluster Devel"
> <gluster-devel@gluster.org>, "gluster-users" <gluster-us...@gluster.org&g
Hi,
When we designed and wrote the sharding feature in GlusterFS, our focus had
been on single-writer-to-large-file use cases, chief among these being the
virtual machine image store use-case.
Sharding, for the uninitiated, is a feature that was introduced in
glusterfs-3.7.0 release with
Could you share the output of 'gluster volume info', and also information as to
which node went down on reboot?
-Krutika
- Original Message -
> From: "André Bauer"
> To: "gluster-users"
> Cc: gluster-devel@gluster.org
> Sent: Friday,
Hi,
While working on http://review.gluster.org/#/c/12217/ , I figured that the
message ids in the doxygen-friendly comments in quota-messages.h are supposed
to start from 12 whereas they seem to start from 11.
This would cause them to conflict with the message ids of the upcall xlator.
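Illustratively (the macro names below are hypothetical, not the real
glfs-message-id.h definitions), each component reserves a contiguous segment of
message ids, so starting quota's segment one slot too early overlaps upcall's:
/* Hypothetical illustration of the id-range collision: */
#define MSGID_BASE          100000
#define MSGID_SEGMENT       1000
#define MSGID_UPCALL_START  (MSGID_BASE + 11 * MSGID_SEGMENT)  /* segment 11 */
#define MSGID_QUOTA_START   (MSGID_BASE + 11 * MSGID_SEGMENT)  /* wrong: collides */
/* quota's ids should instead begin at MSGID_BASE + 12 * MSGID_SEGMENT */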
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull
Ctrl + f 'not ok'.
-Krutika
- Original Message -
> From: "Atin Mukherjee" <amukh...@redhat.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Gluste
kh...@redhat.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Gaurav Garg"
> <gg...@redhat.com>, "Aravinda" <avish...@redhat.com>, "Kotresh Hiremath
> Ravis
Hi All,
In about 2 hours from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Hi,
It looks like the NetBSD slaves 75, 77 and 7j are hung? And since these are the
only slaves online, there seems to be no progress wrt the execution of jobs
lined up in the queue.
-Krutika
Hi,
tests/basic/mount-nfs-auth.t has failed a couple of times in the recent past on
NetBSD (release-3.7).
Here's one of those instances:
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/10196/consoleFull
-Krutika
- Original Message -
> From: "Venky Shankar" <vshan...@redhat.com>
> To: "Aravinda" <avish...@redhat.com>
> Cc: "Shyam" <srang...@redhat.com>, "Krutika Dhananjay" <kdhan...@redhat.com>,
> "Gluster Devel"
- Original Message -
> From: "Sahina Bose" <sab...@redhat.com>
> To: "Krutika Dhananjay" <kdhan...@redhat.com>, "Shyam" <srang...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Thursday,
Hi All,
In about 2 hours from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-02/gluster-meeting.2015-09-02-12.00.txt
Log:
- Original Message -
> From: "Shyam"
> To: "Aravinda" , "Gluster Devel"
>
> Sent: Wednesday, September 2, 2015 8:09:55 PM
> Subject: Re: [Gluster-devel] Gluster Sharding and Geo-replication
> On 09/02/2015 03:12 AM,
- Original Message -
From: Raghavendra Gowdappa rgowd...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Mohammed Rafi K C rkavu...@redhat.com, Gluster Devel
gluster-devel@gluster.org, Dan Lambright dlamb...@redhat.com, Nithya
Balachandran nbala...@redhat.com, Ben Turner
I faced the same issue with the sharding translator. I fixed it by making its
readdirp callback initialize each individual entry's inode ctx from the xattr
values that the posix translator fills into the entry dict.
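A rough sketch of that approach (illustrative, not the merged patch; the xattr
key, how its value is decoded, and the ctx representation are assumptions):
/* In the readdirp callback, seed each entry's inode ctx from the values that
 * posix filled into entry->dict. */
int
my_readdirp_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno, gf_dirent_t *entries,
                dict_t *xdata)
{
        gf_dirent_t *entry = NULL;
        uint64_t block_size = 0;
        if (op_ret <= 0)
                goto unwind;
        list_for_each_entry(entry, &entries->list, list) {
                if (!entry->inode || !entry->dict)
                        continue;
                /* assumed key; posix fills requested xattrs during readdirp */
                if (!dict_get_uint64(entry->dict,
                                     "trusted.glusterfs.shard.block-size",
                                     &block_size))
                        inode_ctx_put(entry->inode, this, block_size);
        }
unwind:
        STACK_UNWIND_STRICT(readdirp, frame, op_ret, op_errno, entries, xdata);
        return 0;
}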
Here is the patch that got merged recently:
- Original Message -
From: Raghavendra Gowdappa rgowd...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Mohammed Rafi K C rkavu...@redhat.com, Gluster Devel
gluster-devel@gluster.org, Dan Lambright dlamb...@redhat.com, Nithya
Balachandran nbala...@redhat.com, Ben Turner
The minutes of the weekly community meeting held today can be found at:
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.txt
Hi,
The following tests seem to be failing fairly consistently on NetBSD:
tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t -
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8661/consoleFull
tests/basic/tier/tier_lookup_heal.t -
This test failed twice on my patch even after retrigger:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/12726/consoleFull
https://build.gluster.org/job/rackspace-regression-2GB-triggered/12739/consoleFull
Is this a new issue or the one that was originally reported?
The patch @ http://review.gluster.org/#/c/9/ fixes this issue. Review(s)
requested.
-Krutika
- Original Message -
From: Krutika Dhananjay kdhan...@redhat.com
To: Soumya Koduri skod...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Monday, June 8, 2015 11:38:00 AM
...@redhat.com, Krutika Dhananjay
kdhan...@redhat.com, Anuradha Talur ata...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, June 4, 2015 6:38:33 PM
Subject: tests/bugs/glusterd/bug-948686.t gave a core
Glustershd is crashing because afr wound xattrop with null gfid in loc
-
From: Niels de Vos nde...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, May 19, 2015 3:12:20 PM
Subject: Re: [Gluster-devel] Regarding the size parameter in readdir(p) fops
On Tue, May 19, 2015 at 05:12:35AM -0400, Krutika
Pranith,
I sent a patch to fix the spurious failure in entry-self-heal.t @
http://review.gluster.org/#/c/10916/ .
Requesting a review on the same.
-Krutika
Hi,
The following patch fixes an issue with readdir(p) in shard xlator:
http://review.gluster.org/#/c/10809/ whose details can be found in the commit
message.
One side effect of this is that from shard xlator, the size of the dirents list
returned to the translators above it could be
...@redhat.com, Krutika Dhananjay kdhan...@redhat.com,
Anuradha Talur ata...@redhat.com
Sent: Thursday, February 26, 2015 5:11:20 PM
Subject: Re: Regression host hung on tests/basic/afr/split-brain-healing.t
On 26 Feb 2015, at 08:57, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
On 02/26/2015
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, February 24, 2015 12:26:58 PM
Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability
from failures
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, February 24, 2015 4:13:13 PM
Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability
from failures
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, February 24, 2015 11:35:28 AM
Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability
from failures
Hello,
The doc @
http://www.gluster.org/community/documentation/index.php/Features/afrv2 says
client-side healing is completely removed in AFR-v2.
However, there IS code in afr_inode_refresh() code path that performs
client-side healing in AFR-v2.
Could you please explain what led to this
sends a
SETATTR for spb_heal ctime/mtime, and since the other brick is down,
here is our metadata split brain.
In http://review.gluster.org/9267, Krutika Dhananjay fixes the test by
clearing AFR xattr to remove the split brain state, but while it let the
test pass, it does not address the real