Sorry about the late response.
I looked at the logs. These errors are originating from posix-acl
translator -
[2019-11-17 07:55:47.090065] E [MSGID: 115050]
[server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162496:
LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.6
Hi Pratik,
What's the version of gluster you're running?
Also, what would help is the volume profile output. Here's what you should
do to capture it:
# gluster volume profile <volname> start
# run_your_workload
# gluster volume profile <volname> info > brick-profile.out
And attach brick-profile.out here.
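Once you have what you need, profiling can be switched off again with the
standard CLI (<volname> being your volume's name):
# gluster volume profile <volname> stop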
Adding (back) gluster-users.
-Krutika
On Fri, Jun 21, 2019 at 1:09 PM Krutika Dhananjay
wrote:
>
>
> On Fri, Jun 21, 2019 at 12:43 PM Cristian Del Carlo <
> cristian.delca...@targetsolutions.it> wrote:
>
>> Thanks Strahil,
>>
>> in this link
>>
explain what the remote-dio and strict-o-direct options changed in the
> behaviour of my Gluster? It would be great for later archive readers to
> understand what changed and why it solved my issue.
>
> Anyway, Thanks a LOT!!!
>
> BR,
> Martin
>
On 13 May 2019, at 10:20, Krutika Dhananjay wrote:
send profiling info after some VM fails. I
> suppose this is the correct profiling strategy.
>
About this, how many vms do you need to recreate it? A single vm? Or
multiple vms doing IO in parallel?
> Thanks,
> BR!
> Martin
>
On 13 May 2019, at 09:21, Krutika Dhananjay wrote:
Also, what's the caching policy that qemu is using on the affected vms?
Is it cache=none? Or something else? You can get this information in the
command line of qemu-kvm process corresponding to your vm in the ps output.
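You can, for example, grep it out of the process list (a sketch; this assumes
the cache= option appears in the qemu-kvm command line):
# ps -ef | grep '[q]emu-kvm' | grep -o 'cache=[^,]*'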
-Krutika
On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay
wrote:
> W
What version of gluster are you using?
Also, can you capture and share volume-profile output for a run where you
manage to recreate this issue?
https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
Let me know if you have any
On Fri, Apr 19, 2019 at 12:48 PM wrote:
> On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote:
> > Looks good mostly.
> > You can also turn on performance.stat-prefetch, and also set
>
> Ah the corruption bug has been fixed, I missed that. Great !
>
Looks good mostly.
You can also turn on performance.stat-prefetch, and also set
client.event-threads and server.event-threads to 4.
And if your bricks are on ssds, then you could also enable
performance.client-io-threads.
And if your bricks and hypervisors are on the same set of machines
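In command form, those settings would look something like this (volume name
assumed to be <volname>; the last one only if your bricks are on SSDs, as
noted above):
# gluster volume set <volname> performance.stat-prefetch on
# gluster volume set <volname> client.event-threads 4
# gluster volume set <volname> server.event-threads 4
# gluster volume set <volname> performance.client-io-threads on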
Adding back gluster-users
Comments inline ...
On Fri, Mar 29, 2019 at 8:11 PM Olaf Buitelaar
wrote:
> Dear Krutika,
>
>
>
> 1. I’ve made 2 profile runs of around 10 minutes (see files
> profile_data.txt and profile_data2.txt). Looking at it, most time seems to
> be spent at the fops fsync and
Questions/comments inline ...
On Thu, Mar 28, 2019 at 10:18 PM wrote:
> Dear All,
>
> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
> previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a
> different experience. After first trying a test upgrade on a 3
s really went down - performing inside vm fio tests.
>
> On Wed, Mar 27, 2019, 07:03 Krutika Dhananjay wrote:
>
>> Could you enable strict-o-direct and disable remote-dio on the src volume
>> as well, restart the vms on "old" and retry migration?
>>
>> # gluster v
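>> That is, something along these lines (volume name assumed):
>> # gluster volume set <volname> performance.strict-o-direct on
>> # gluster volume set <volname> network.remote-dio disable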
jen wrote:
> On 26-03-19 14:23, Sahina Bose wrote:
> > +Krutika Dhananjay and gluster ml
> >
> > On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen
> wrote:
> >> Hello,
> >>
> >> tl;dr We have disk corruption when doing live storage migration on oVirt
On Fri, Feb 15, 2019 at 12:30 AM Jayme wrote:
> Running an oVirt 4.3 HCI 3-way replica cluster with SSD backed storage.
> I've noticed that my SSD writes (smart Total_LBAs_Written) are quite high
> on one particular drive. Specifically I've noticed one volume is much much
> higher total bytes
Hi,
So the main issue is that certain vms seem to be pausing? Did I understand
that right?
Could you share the gluster-mount logs around the time the pause was seen?
And the brick logs too please?
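(By default, the fuse mount log is under /var/log/glusterfs/ on the client,
named after the mount path, and the brick logs are under
/var/log/glusterfs/bricks/ on the servers.)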
As for ESTALE errors, the real cause of pauses can be determined from
errors/warnings logged by
info.
>
> After a long time the preallocated disk has been created properly. It was
> a 1TB disk on a hdd pool so a bit of delay was expected.
>
> But it took a bit longer than expected. The disk had no other virtual
> disks on it. Is there something I can tweak or check for this?
>
>
>
> Regards, Jorick
>
> On 10/31/2018 01:10 PM, Krutika Dhananjay wrote:
>
These log messages represent a transient state and are harmless and can be
ignored. This happens when a lookup and mknod to create shards happen in
parallel.
Regarding the preallocated disk creation issue, could you check if there
are any errors/warnings in the fuse mount logs (these are named as
Hi,
Apologies for the late reply. My email filters are messed up, I missed
reading this.
Answers to questions around shard algorithm inline ...
On Sun, Sep 30, 2018 at 9:54 PM Ashayam Gupta
wrote:
> Hi Pranith,
>
> Thanks for your reply, it would be helpful if you can please help us with
> the
a16c-1d07-4153-8d01-b9b0ffd9d19b.16177)
> ==> (No data available) [No data available]
>
> if so what does it mean?
>
> Dan
>
> On Tue, Aug 16, 2016 at 1:21 AM, Krutika Dhananjay
> wrote:
>
>> Thanks, I just sent http://review.gluster.org/#/c/15161/1 to reduc
Could you share the gluster brick and mount logs?
-Krutika
On Mon, Jun 11, 2018 at 2:14 PM, wrote:
> Hi,
>
> Given the numerous problems we've had with setting up gluster for VM
> hosting at the start, we've been staying with 3.7.15, which was the
> first version to work properly.
>
> However
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from
qemu had pointed out that this would be fixed by the following commit:
commit
The short answer is - no there exists no script currently that can piece
the shards together into a single file.
Long answer:
IMO the safest way to convert from sharded to a single file _is_ by copying
the data out into a new volume at the moment.
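A minimal sketch of that copy, assuming the old (sharded) volume is
FUSE-mounted at /mnt/old and the new volume at /mnt/new (paths and image name
hypothetical):
# cp --sparse=always /mnt/old/images/vm1.img /mnt/new/images/vm1.img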
Picking up the files from the individual bricks
Sorry about the late reply, I missed seeing your mail.
To begin with, what is your use-case? Sharding is currently supported only
for virtual machine image storage use-case.
It *could* work in other single-writer use-cases but it's only tested
thoroughly for the vm use-case.
If yours is not a vm
On Thu, Apr 5, 2018 at 7:33 AM, Ian Halliday wrote:
> Hello,
>
> I wanted to post this as a question to the group before we go launch it in
> a test environment. Will Gluster handle enabling sharding on an existing
> distributed-replicated environment, and is it safe to do?
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of shard translator on the
stack.
Adding DHT devs to take a look.
-Krutika
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday wrote:
> Hello all,
>
> We are
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen
wrote:
> Hi again,
>
> here is more information regarding issue described earlier
>
> It looks like self healing is stuck. According to "heal statistics" crawl
> began at Sat Jan 20 12:56:19 2018 and it's still going on
trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0xc1530060be90
>
> Hope this helps.
>
>
>
> On 17/01/2018 11:37, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
>
> I actually use FUSE an
ntly without problems. I've installed 4 vm and updated them
> without problems.
>
>
>
> On 17/01/2018 11:00, Krutika Dhananjay wrote:
>
>
>
> On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl
> <l...@gvnet.it> wrote:
>
>> I
ted 3 times between [2018-01-16 15:18:07.154330] and
> [2018-01-16 15:18:07.154357]
> [2018-01-16 15:19:23.618794] W [MSGID: 113096]
> [posix-handle.c:770:posix_handle_hard]
> 0-gv2a2-posix: link
> /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
> ->
>
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhan...@redhat.com>
Please share the volume-info output and the logs under /var/log/glusterfs/
from all your nodes, so we can investigate the issue.
-Krutika
On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
l...@gvnet.it> wrote:
> Hi to everyone.
>
> I've got a strange problem with a gluster
trusted.io-stats-dump is a virtual (not physical) extended attribute.
The code is written in a way that a request to set trusted.io-stats-dump
is intercepted at the io-stats translator layer on the stack, where
it gets converted into the action of dumping the statistics into the
provided output
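For example, to trigger a dump (a sketch; the mount point and output path are
assumptions):
# setfattr -n trusted.io-stats-dump -v /tmp/gluster-io-stats.txt /mnt/glusterfs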
Thank you for your reply.
> Lindsay,
> Uunfortunately i do not have backup for this template.
>
> Krutika,
> The stat-prefetch is already disabled on the volume.
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay <kdhan...
Could you disable stat-prefetch on the volume and create another vm off
that template and see if it works?
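That is (volume name assumed):
# gluster volume set <volname> performance.stat-prefetch off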
-Krutika
On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Any chance of a backup you could do bit compare with?
>
>
>
> Sent from my Windows 10
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> Any update about multiple bugs regarding data corruptions with
>> sharding enabled ?
>>
>> Is 3.12.1 ready to
Could you disable cluster.eager-lock and try again?
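In command form (volume name assumed):
# gluster volume set <volname> cluster.eager-lock off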
-Krutika
On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial
> drop in read/write performance
>
> env:
>
> - 3 node, replica 3
Hi Krutika,
>
> Attached the profile stats. I enabled profiling then ran some dd tests.
> Also 3 Windows VMs are running on top this volume but did not do any stress
> testing on the VMs. I have left the profiling enabled in case more time is
> needed for useful stats.
>
> Thanx
>
if=/dev/zero of=testfile bs=1G
> count=1 I get 65MB/s on the vms gluster volume (and the network traffic
> between the servers reaches ~ 500Mbps), while when testing with dd
> if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a consistent
> 10MB/s and the network traffic hardly reachi
Hi,
Speaking from shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of the writes - such as creating the shard and then writing to it
and then updating the aggregated
I'm assuming you are using this volume to store vm images, because I see
shard in the options list.
Speaking from shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of
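For instance, one way to create a preallocated image (a sketch using
qemu-img; path, format and size are hypothetical):
# qemu-img create -f qcow2 -o preallocation=falloc /mnt/glusterfs/images/vm1.qcow2 100G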
+gluster-users ML
Hi,
I've responded to your bug report here -
https://bugzilla.redhat.com/show_bug.cgi?id=1473150#c3
Kindly let us know if the patch fixes your bug.
-Krutika
On Thu, Jul 20, 2017 at 3:12 PM, zhangjianwei1...@163.com <
zhangjianwei1...@163.com> wrote:
> Hi Krutika
>
> *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@
> gluster.org] *On Behalf Of *gen...@gencgiyen.com
> *Sent:* Thursday, July 6, 2017 11:06 AM
>
> *To:* 'Krutika Dhananjay' <kdhan...@redhat.com>
> *Cc:* 'gluster-user' <gluster-users@gluster.org
What if you disabled eager lock and ran your test again on the sharded
configuration, along with the profile output?
# gluster volume set <volname> cluster.eager-lock off
-Krutika
On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Thanks. I think reusing the sa
>
> 5+0 records out
>
> 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 66.7978 s, 80.4 MB/s
>
>
>
>
>
> >> dd if=/dev/zero of=/mnt/testfile bs=5G count=1
>
> This also gives the same result (bs and count reversed).
>
>
>
>
>
> And this example hav
idea or
> suggestion?
>
>
>
> Thanks,
>
> -Gencer
>
>
>
> *From:* Krutika Dhananjay [mailto:kdhan...@redhat.com]
> *Sent:* Friday, June 30, 2017 3:50 PM
>
> *To:* gen...@gencgiyen.com
> *Cc:* gluster-user <gluster-users@gluster.org>
> *Subject:* Re: [Glu
oc-50-14-18:/bricks/brick6
>
> Brick17: sr-10-loc-50-14-18:/bricks/brick7
>
> Brick18: sr-10-loc-50-14-18:/bricks/brick8
>
> Brick19: sr-10-loc-50-14-18:/bricks/brick9
>
> Brick20: sr-10-loc-50-14-18:/bricks/brick10
>
> Options Reconfigured:
>
> features.s
Could you please provide the volume-info output?
-Krutika
On Fri, Jun 30, 2017 at 4:23 PM, wrote:
> Hi,
>
>
>
> I have 2 nodes with 20 bricks in total (10+10).
>
>
>
> First test:
>
>
>
> 2 Nodes with Distributed – Striped – Replicated (2 x 2)
>
> 10GbE Speed between
No, you don't need to do any of that. Just executing volume-set commands is
sufficient for the changes to take effect.
-Krutika
On Wed, Jun 21, 2017 at 3:48 PM, Chris Boot <bo...@bootc.net> wrote:
> [replying to lists this time]
>
> On 20/06/17 11:23, Krutika Dhananjay wr
No. It's just that in the internal testing that was done here, increasing
the thread count beyond 4 did not improve the performance any further.
-Krutika
On Tue, Jun 20, 2017 at 11:30 PM, mabi wrote:
> Dear Krutika,
>
> Sorry for asking so naively but can you tell me on
Couple of things:
1. Like Darrell suggested, you should enable stat-prefetch and increase
client and server event threads to 4.
# gluster volume set <volname> performance.stat-prefetch on
# gluster volume set <volname> client.event-threads 4
# gluster volume set <volname> server.event-threads 4
2. Also glusterfs-3.10.1
ges/d59487fe-
> f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f
> (93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or directory]
>
>
>
> Although the process went smoothly, I will run another extensive test
> tomorrow just to be sure.
>
> --
>
> Re
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Great news.
> Is this planned to be published in next release?
>
> On 29 May 2017 3:27 PM, "Kr
Although the process went smoothly, I will run another extensive test
> tomorrow just to be sure.
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Monday, May 29, 2017 9:20:29 AM
>
> *To:* Mahdi Adnan
> *C
ached are the logs for both the rebalance and the mount.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> ------
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Friday, May 26, 2017 1:12:28 PM
> *To:* Mahdi Adnan
> *Cc:
; VMs started to fail after rebalancing.
>
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Wednesday, May 17, 2017 6:59:20 AM
> *To:* gluster-user
> *Cc:* Gandalf Corvotempesta; Lin
it's going to be updated?
>
> Is there any other recommended place to get the latest rpms ?
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Mahdi Adnan <mahdi.ad...@outlook.com>
> *Sent:* Friday, May 19, 2017 6:14:05 PM
> *To:* Krut
Hi,
In the past couple of weeks, we've sent the following fixes concerning VM
corruption upon doing rebalance -
https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051
These fixes are very much part of the latest 3.10.2 release.
Satheesaran within Red Hat
Shard translator is currently supported only for VM image store workload.
-Krutika
On Sun, May 14, 2017 at 12:50 AM, Benjamin Kingston
wrote:
> Here's some log entries from nfs-ganesha gfapi
>
> [2017-05-13 19:02:54.105936] E [MSGID: 133010]
>
Niels,
Allesandro's configuration does not have shard enabled. So it has
definitely not got anything to do with shard not supporting seek fop.
Copy-pasting volume-info output from the first mail:
Volume Name: datastore2
Type: Replicate
Volume ID: c95ebb5f-6e04-4f09-91b9-bbbe63d83aea
Status:
The newly introduced "SEEK" fop seems to be failing at the bricks.
Adding Niels for his inputs/help.
-Krutika
On Mon, May 8, 2017 at 3:43 PM, Alessandro Briosi wrote:
> Hi all,
> I have sporadic VM going down which files are on gluster FS.
>
> If I look at the gluster logs
Yeah, there are a couple of cache consistency issues with performance
translators that are causing these exceptions.
Some of them were fixed by 3.10.1. Some still remain.
Alternatively you can give gluster-block + elasticsearch a try, which
doesn't require solving all these caching issues.
Here's
Hi,
Work is in progress for this (and going a bit slow at the moment because of
other priorities).
At the moment we support sharding only for VM image store use-case - most
common large file + single writer use case we know of.
Just curious, what is the use case where you want to use shard+EC?
Yes, that's about it. Pranith pretty much summed up whatever I would have
said.
-Krutika
On Sat, Apr 22, 2017 at 12:25 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> +Krutika for any other inputs you may need.
>
> On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri <
>
for more details.
-Krutika
On Tue, Apr 4, 2017 at 8:09 PM, Alessandro Briosi <a...@metalit.com> wrote:
> On 04/04/2017 16:16, Krutika Dhananjay wrote:
> > So the corruption bug is seen iff your vms are online while fix-layout
> > and/or rebalance is going on.
> > Doe
So the corruption bug is seen iff your vms are online while fix-layout
and/or rebalance is going on.
Does that answer your question?
The same issue has now been root-caused and there will be a fix for it soon
by Raghavendra G.
-Krutika
On Mon, Apr 3, 2017 at 6:44 PM, Alessandro Briosi
Nope. This is a different bug.
-Krutika
On Mon, Apr 3, 2017 at 5:03 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> This is a good news
> Is this related to the previously fixed bug?
>
> On 3 Apr 2017 10:22 AM, "Krutika Dhananjay" <kd
n happen.
> Raghavendra(CCed) would be the right person to provide accurate update.
>
>>
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>> --
>> *From:* Krutika Dhananjay <kdhan...@redhat.com>
>> *Sent:* Tuesday, March
ther volume is mounted using gfapi in oVirt cluster.
>
>
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Sunday, March 19, 2017 2:01:49 PM
>
> *To:* Mahdi Adnan
>
> Respectfully
> *Mahdi A. Mahdi*
>
> ------
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Sunday, March 19, 2017 10:00:22 AM
>
> *To:* Mahdi Adnan
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Gluster 3.8.10 reb
im
> testing now.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> ------
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Sunday, March 19, 2017 8:02:19 AM
>
> *To:* Mahdi Adnan
> *Cc:* gluster-users@gluster.or
e? Or is it NFS?
-Krutika
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> ----------
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Saturday, March 18, 2017 6:10:40 PM
>
> *To:* Mahdi Adnan
> *Cc:* gluster-users
> doing a very basic routine procedure that should be considered as the
> basis of the whole gluster project (as written on gluster's homepage)
>
>
> 2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhan...@redhat.com>:
> >
> >
> > On Sat, Mar 18, 2017
On Fri, Mar 10, 2017 at 4:09 PM, Cedric Lemarchand
wrote:
>
> > On 10 Mar 2017, at 10:33, Alessandro Briosi wrote:
> >
> > On 10/03/2017 10:28, Kevin Lemonnier wrote:
> >>> I haven't done any test yet, but I was under the impression that
> >>> sharding
On Fri, Mar 10, 2017 at 3:03 PM, Alessandro Briosi wrote:
> On 10/03/2017 10:28, Kevin Lemonnier wrote:
>
> I haven't done any tests yet, but I was under the impression that the
> sharding feature isn't so stable/mature yet.
> In the back of my mind I remember reading
AM, Georgi Mirchev <gmirc...@usa.net> wrote:
>
> On 03/08/2017 at 03:37 PM, Krutika Dhananjay wrote:
>
> Thanks for your feedback.
>
> May I know what was the shard-block-size?
>
> The shard size is 4 MB.
>
> One way to fix this would be to make shard transla
Thanks for your feedback.
May I know what was the shard-block-size?
One way to fix this would be to make shard translator delete only the base
file (0th shard) in the IO path and move
the deletion of the rest of the shards to background. I'll work on this.
-Krutika
On Fri, Mar 3, 2017 at 10:35
Hi Niels,
Care to merge the following two 3.8 backports:
https://review.gluster.org/16749 and
https://review.gluster.org/16750
and in that order. One of the users who'd reported this issue has confirmed
that the patch fixed the issue. So did Satheesaran.
-Krutika
On Wed, Feb 15, 2017 at 9:38 PM, Gambit15 wrote:
> Hey guys,
> I keep seeing different recommendations for the best shard sizes for VM
> images, from 64MB to 512MB.
>
> What's the benefit of smaller v larger shards?
> I'm guessing smaller shards are quicker to heal,
Could you please attach the "glfsheal-.log" logfile?
-Krutika
On Thu, Feb 16, 2017 at 12:05 AM, Andrea Fogazzi wrote:
> Hello,
>
> I have a gluster volume on 3.8.8 which has multiple volumes, each on
> distributed/replicated on 5 servers (2 replicas+1 quorum); each volume is
Could you describe what use-case you intend to use striping for?
-Krutika
On Tue, Jan 17, 2017 at 12:52 PM, Dave Fan wrote:
> Hello everyone,
>
> We are trying to set up Gluster-based storage for best performance. On
> the official Gluster website, it says:
>
>
ld I find the fuse client log?
>
> On Fri, Nov 18, 2016 at 2:22 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
> > Could you attach the fuse client and brick logs?
> >
> > -Krutika
> >
> > On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert <
>
Could you attach the fuse client and brick logs?
-Krutika
On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert
wrote:
> Okay, used the exact same config you provided, and added an arbiter
> node (node3)
>
> After halting node2, VM continues to work after a small
Yes. I apologise for the delay.
Disabling sharding would knock the translator itself off the client stack,
and
given that sharding is the actual (and only) translator that has the
knowledge of how to interpret sharded files, and how to aggregate them,
removing the translator from the stack
On Mon, Nov 14, 2016 at 8:24 PM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
Which data corruption issue is this? Could you point me to the bug report
on bugzilla?
-Krutika
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> > We've had a lot of
Hi,
Yes, this has been reported before by Lindsay Mathieson and Kevin Lemonnier
on this list.
We just found one issue with replace-brick that we recently fixed.
In your case, are you doing add-brick and changing the replica count (say
from 2 -> 3) or are you adding
"replica-count" number of
There is compound fops feature coming up which reduces the
number of calls over the network in AFR transactions, thereby
improving performance. It will be available in 3.9 (and latest
upstream master too) if you're interested to try it out but
DO NOT use it in production yet. It may have some
Awesome. Thanks for the logs. Will take a look.
-Krutika
On Sun, Oct 23, 2016 at 5:47 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 20/10/2016 9:13 PM, Krutika Dhananjay wrote:
>
>> It would be awesome if you could tell us whether you
>> see the issue
Thanks a lot, Lindsay! Appreciate the help.
It would be awesome if you could tell us whether you
see the issue with FUSE as well, while we get around
to setting up the environment and running the test ourselves.
-Krutika
On Thu, Oct 20, 2016 at 2:57 AM, Lindsay Mathieson <
Agreed.
I will run the same test on an actual vm setup one of these days and
see if I manage to recreate the issue (after I have completed some
of my long pending tasks). Meanwhile if any of you find a consistent simpler
test case to hit the issue, feel free to reply on this thread. At least I
had
cluster.server-quorum-type: server
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> features.shard: on
>> features.shard-block-size: 64
Any errors/warnings in the glustershd logs?
-Krutika
On Sat, Oct 1, 2016 at 8:18 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> This was raised earlier but I don't believe it was ever resolved and it is
> becoming a serious issue for me.
>
>
> I'm doing rolling upgrades on our
ab7557d582484a068c3478e342069326 lastlog
-Krutika
On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Hi,
>
> What version of gluster are you using?
> Also, could you share your volume configuration (`gluster volume info`)?
>
> -Krutika
>
>
Hi,
What version of gluster are you using?
Also, could you share your volume configuration (`gluster volume info`)?
-Krutika
On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N
wrote:
> On 09/28/2016 12:16 AM, ML Wong wrote:
>
> Hello Ravishankar,
> Thanks for introducing
On Tue, Sep 6, 2016 at 7:27 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> Going to top post with solution Krutika Dhananjay came up with. His steps
> were much less volatile and could be done with volume still being actively
> used and also much less prone to acciden
Theoretically whatever you said is correct (at least from shard's
perspective).
Adding Rafi who's worked on tiering to know if he thinks otherwise.
It must be mentioned that the sharding + tiering combination hasn't been
tested as such till now, by us at least.
Did you try it? If so, what was your experience?
Could you please attach the glusterfs client and brick logs?
Also provide output of `gluster volume info`.
-Krutika
On Tue, Sep 6, 2016 at 4:29 AM, Kevin Lemonnier
wrote:
> >- What was the original (and current) geometry? (status and info)
>
> It was a 1x3 that I was
trusted.glusterfs.dht=0x0001
> trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15
> user.some-name=0x736f6d652d76616c7565
>
> meta-data split-brain? heal <> info split-brain shows no files or
> entries. If I had thought ahead I would have
No, sorry, it's working fine. I may have missed some step because of which
i saw that problem. /.shard is also healing fine now.
Let me know if it works for you.
-Krutika
On Wed, Aug 31, 2016 at 12:49 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> OK I just hit the other issue t