Re: [Gluster-users] [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

2019-12-01 Thread Krutika Dhananjay
Sorry about the late response. I looked at the logs. These errors are originating from the posix-acl translator - [2019-11-17 07:55:47.090065] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162496: LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.6

Re: [Gluster-users] Reg Performance issue in GlusterFS

2019-10-07 Thread Krutika Dhananjay
Hi Pratik, What's the version of gluster you're running? Also, what would help is the volume profile output. Here's what you should do to capture it: # gluster volume profile <volname> start # run_your_workload # gluster volume profile <volname> info > brick-profile.out And attach brick-profile.out here.
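For reference, the capture sequence written out as a sketch; the volume name "myvol" and the workload step are illustrative placeholders, not from the original mail:

    # start collecting per-brick latency and fop statistics
    gluster volume profile myvol start
    # ... run the workload that exhibits the problem ...
    # dump the accumulated statistics, then stop profiling
    gluster volume profile myvol info > brick-profile.out
    gluster volume profile myvol stop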

Re: [Gluster-users] General questions

2019-06-21 Thread Krutika Dhananjay
Adding (back) gluster-users. -Krutika On Fri, Jun 21, 2019 at 1:09 PM Krutika Dhananjay wrote: > > > On Fri, Jun 21, 2019 at 12:43 PM Cristian Del Carlo < > cristian.delca...@targetsolutions.it> wrote: > >> Thanks Strahil, >> >> in this link >>

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-21 Thread Krutika Dhananjay
explain what remote-dio and strict-o-direct variables changed in > behaviour of my Gluster? It would be great for later archive/users to > understand what and why this solved my issue. > > Anyway, Thanks a LOT!!! > > BR, > Martin > > On 13 May 2019, at 10:20, Krutika Dhananjay wr

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
/send profiling info after some VM will be failed. I > suppose this is correct profiling strategy. > About this, how many vms do you need to recreate it? A single vm? Or multiple vms doing IO in parallel? > Thanks, > BR! > Martin > > On 13 May 2019, at 09:21, Kr

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
Also, what's the caching policy that qemu is using on the affected vms? Is it cache=none? Or something else? You can get this information from the command line of the qemu-kvm process corresponding to your vm in the ps output. -Krutika On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay wrote: > W
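A quick way to check, sketched below; the process filter is illustrative, and the cache mode appears among the drive options of the VM's process:

    # locate the qemu-kvm process for the affected VM
    ps -ef | grep [q]emu-kvm
    # in its command line, look for cache=none / cache=writeback / cache=writethrough
    # inside the -drive (or -blockdev) arguments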

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread Krutika Dhananjay
What version of gluster are you using? Also, can you capture and share volume-profile output for a run where you manage to recreate this issue? https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command Let me know if you have any

Re: [Gluster-users] Settings for VM hosting

2019-04-22 Thread Krutika Dhananjay
On Fri, Apr 19, 2019 at 12:48 PM wrote: > On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote: > > Looks good mostly. > > You can also turn on performance.stat-prefetch, and also set > > Ah the corruption bug has been fixed, I missed that. Great ! >

Re: [Gluster-users] Settings for VM hosting

2019-04-18 Thread Krutika Dhananjay
Looks good mostly. You can also turn on performance.stat-prefetch, and also set client.event-threads and server.event-threads to 4. And if your bricks are on ssds, then you could also enable performance.client-io-threads. And if your bricks and hypervisors are on the same set of machines
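Spelled out as volume-set commands — a sketch assuming a volume named "myvol":

    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4
    # only if the bricks are backed by SSDs:
    gluster volume set myvol performance.client-io-threads on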

Re: [Gluster-users] [ovirt-users] Re: Announcing Gluster release 5.5

2019-03-31 Thread Krutika Dhananjay
Adding back gluster-users Comments inline ... On Fri, Mar 29, 2019 at 8:11 PM Olaf Buitelaar wrote: > Dear Krutika, > > > > 1. I’ve made 2 profile runs of around 10 minutes (see files > profile_data.txt and profile_data2.txt). Looking at it, most time seems to be > spent at the fop’s fsync and

Re: [Gluster-users] [ovirt-users] Re: Announcing Gluster release 5.5

2019-03-29 Thread Krutika Dhananjay
Questions/comments inline ... On Thu, Mar 28, 2019 at 10:18 PM wrote: > Dear All, > > I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While > previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a > different experience. After first trying a test upgrade on a 3

Re: [Gluster-users] [ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-27 Thread Krutika Dhananjay
s really went down - performing inside vm fio tests. > > On Wed, Mar 27, 2019, 07:03 Krutika Dhananjay wrote: > >> Could you enable strict-o-direct and disable remote-dio on the src volume >> as well, restart the vms on "old" and retry migration? >> >> # gluster v
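The command cut off in the snippet is presumably of this form — a sketch assuming the source volume is named "src":

    # honor O_DIRECT end-to-end rather than demoting it to buffered I/O
    gluster volume set src performance.strict-o-direct on
    gluster volume set src network.remote-dio disable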

Re: [Gluster-users] [ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-26 Thread Krutika Dhananjay
jen wrote: > On 26-03-19 14:23, Sahina Bose wrote: > > +Krutika Dhananjay and gluster ml > > > > On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen > wrote: > >> Hello, > >> > >> tl;dr We have disk corruption when doing live storage migration on oVirt &

Re: [Gluster-users] [ovirt-users] Tracking down high writes in GlusterFS volume

2019-02-25 Thread Krutika Dhananjay
On Fri, Feb 15, 2019 at 12:30 AM Jayme wrote: > Running an oVirt 4.3 HCI 3-way replica cluster with SSD backed storage. > I've noticed that my SSD writes (smart Total_LBAs_Written) are quite high > on one particular drive. Specifically I've noticed one volume is much much > higher total bytes

Re: [Gluster-users] [Stale file handle] in shard volume

2019-01-13 Thread Krutika Dhananjay
Hi, So the main issue is that certain vms seem to be pausing? Did I understand that right? Could you share the gluster-mount logs around the time the pause was seen? And the brick logs too please? As for ESTALE errors, the real cause of pauses can be determined from errors/warnings logged by

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Krutika Dhananjay
info. > > After a long time the preallocated disk has been created properly. It was > a 1TB disk on a hdd pool so a bit of delay was expected. > > But it took a bit longer then expected. The disk had no other virtual > disks on it. Is there something I can tweak or check for this? > >

Re: [Gluster-users] posix_handle_hard [file exists]

2018-11-05 Thread Krutika Dhananjay
s on it. Is there something I can tweak or check for this? > > Regards, Jorick > > On 10/31/2018 01:10 PM, Krutika Dhananjay wrote: > > These log messages represent a transient state and are harmless and can be > ignored. This happens when a lookup and mknod to c

Re: [Gluster-users] posix_handle_hard [file exists]

2018-10-31 Thread Krutika Dhananjay
These log messages represent a transient state and are harmless and can be ignored. This happens when a lookup and mknod to create shards happen in parallel. Regarding the preallocated disk creation issue, could you check if there are any errors/warnings in the fuse mount logs (these are named as

Re: [Gluster-users] sharding in glusterfs

2018-10-05 Thread Krutika Dhananjay
Hi, Apologies for the late reply. My email filters are messed up, I missed reading this. Answers to questions around shard algorithm inline ... On Sun, Sep 30, 2018 at 9:54 PM Ashayam Gupta wrote: > Hi Pranith, > > Thanks for you reply, it would be helpful if you can please help us with > the

Re: [Gluster-users] Fwd: vm paused unknown storage error one node out of 3 only

2018-06-12 Thread Krutika Dhananjay
a16c-1d07-4153-8d01-b9b0ffd9d19b.16177) > ==> (No data available) [No data available] > > if so what does it mean? > > Dan > > On Tue, Aug 16, 2016 at 1:21 AM, Krutika Dhananjay > wrote: > >> Thanks, I just sent http://review.gluster.org/#/c/15161/1 to reduc

Re: [Gluster-users] Current bug for VM hosting with 3.12 ?

2018-06-12 Thread Krutika Dhananjay
Could you share the gluster brick and mount logs? -Krutika On Mon, Jun 11, 2018 at 2:14 PM, wrote: > Hi, > > Given the numerous problems we've had with setting up gluster for VM > hosting at the start, we've been staying with 3.7.15, which was the > first version to work properly. > > However

Re: [Gluster-users] [ovirt-users] Re: Gluster problems, cluster performance issues

2018-05-29 Thread Krutika Dhananjay
Adding Ravi to look into the heal issue. As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit: commit

Re: [Gluster-users] Reconstructing files from shards

2018-04-26 Thread Krutika Dhananjay
The short answer is: no, there is currently no script that can piece the shards together into a single file. Long answer: IMO the safest way to convert from sharded to a single file _is_ by copying the data out into a new volume at the moment. Picking up the files from the individual bricks
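A sketch of the copy-out approach; server and path names are illustrative. The point is to read through a FUSE mount, where the shard translator reassembles the pieces into whole files:

    # mount the old (sharded) and new volumes via FUSE
    mount -t glusterfs server1:/oldvol /mnt/old
    mount -t glusterfs server1:/newvol /mnt/new
    # copy the image as the client sees it, preserving sparseness
    cp --sparse=always /mnt/old/images/vm1.img /mnt/new/images/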

Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-13 Thread Krutika Dhananjay
Sorry about the late reply, I missed seeing your mail. To begin with, what is your use-case? Sharding is currently supported only for virtual machine image storage use-case. It *could* work in other single-writer use-cases but it's only tested thoroughly for the vm use-case. If yours is not a vm

Re: [Gluster-users] Enable sharding on active volume

2018-04-04 Thread Krutika Dhananjay
On Thu, Apr 5, 2018 at 7:33 AM, Ian Halliday wrote: > Hello, > > I wanted to post this as a question to the group before we go launch it in > a test environment. Will Gluster handle enabling sharding on an existing > distributed-replicated environment, and is it safe to do?

Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids

2018-03-26 Thread Krutika Dhananjay
The gfid mismatch here is between the shard and its "link-to" file, the creation of which happens at a layer below that of shard translator on the stack. Adding DHT devs to take a look. -Krutika On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday wrote: > Hello all, > > We are

Re: [Gluster-users] Stale locks on shards

2018-01-22 Thread Krutika Dhananjay
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen wrote: > Hi again, > > here is more information regarding issue described earlier > > It looks like self healing is stuck. According to "heal statistics" crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on

Re: [Gluster-users] [Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding

2018-01-18 Thread Krutika Dhananjay
ard.block-size=0x0400 > trusted.glusterfs.shard.file-size=0xc153 > 0060be90 > > Hope this helps. > > > > Il 17/01/2018 11:37, Ing. Luca Lazzeroni - Trend Servizi Srl ha scritto: > > I actually use FUSE an

Re: [Gluster-users] [Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding

2018-01-17 Thread Krutika Dhananjay
ntly without problems. I've installed 4 vm and updated them > without problems. > > > > Il 17/01/2018 11:00, Krutika Dhananjay ha scritto: > > > > On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl > <l...@gvnet.it> wrote: > >> I

Re: [Gluster-users] [Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding

2018-01-17 Thread Krutika Dhananjay
ted 3 times between [2018-01-16 15:18:07.154330] and > [2018-01-16 15:18:07.154357] > [2018-01-16 15:19:23.618794] W [MSGID: 113096] > [posix-handle.c:770:posix_handle_hard] > 0-gv2a2-posix: link > /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 > -> >

Re: [Gluster-users] Problem with Gluster 3.12.4, VM and sharding

2018-01-16 Thread Krutika Dhananjay
Also to help isolate the component, could you answer these: 1. on a different volume with shard not enabled, do you see this issue? 2. on a plain 3-way replicated volume (no arbiter), do you see this issue? On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhan...@redhat.com>

Re: [Gluster-users] Problem with Gluster 3.12.4, VM and sharding

2018-01-16 Thread Krutika Dhananjay
Please share the volume-info output and the logs under /var/log/glusterfs/ from all your nodes for investigating the issue. -Krutika On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl < l...@gvnet.it> wrote: > Hi to everyone. > > I've got a strange problem with a gluster

Re: [Gluster-users] What is it with trusted.io-stats-dump?

2017-11-13 Thread Krutika Dhananjay
trusted.io-stats-dump is a virtual (not physical) extended attribute. The code is written such that a request to set trusted.io-stats-dump is intercepted at the io-stats translator layer on the stack, where it is converted into the action of dumping the statistics into the provided output
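In practice the dump is triggered by setting the xattr on a mount point — a sketch, with the output path and mount point illustrative:

    # io-stats intercepts this setxattr and writes its counters to the named file
    setfattr -n trusted.io-stats-dump -v /tmp/io-stats.out /mnt/glusterfs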

Re: [Gluster-users] Gluster 3.8.13 data corruption

2017-10-09 Thread Krutika Dhananjay
ou for your reply. > Lindsay, > Unfortunately i do not have backup for this template. > > Krutika, > The stat-prefetch is already disabled on the volume. > > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Krutika Dhananjay <kdhan...

Re: [Gluster-users] Gluster 3.8.13 data corruption

2017-10-05 Thread Krutika Dhananjay
Could you disable stat-prefetch on the volume and create another vm off that template and see if it works? -Krutika On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > Any chance of a backup you could do bit compare with? > > > > Sent from my Windows 10

Re: [Gluster-users] data corruption - any update?

2017-10-04 Thread Krutika Dhananjay
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran wrote: > > > On 3 October 2017 at 13:27, Gandalf Corvotempesta < > gandalf.corvotempe...@gmail.com> wrote: > >> Any update about multiple bugs regarding data corruptions with >> sharding enabled ? >> >> Is 3.12.1 ready to

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread Krutika Dhananjay
Could you disable cluster.eager-lock and try again? -Krutika On Thu, Sep 21, 2017 at 6:31 PM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > Upgraded recently from 3.8.15 to 3.10.5 and have seen a fairly substantial > drop in read/write perfomance > > env: > > - 3 node, replica 3
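That is, assuming the volume is named "myvol":

    # disable eager locking for the test; re-enable it afterwards with "on"
    gluster volume set myvol cluster.eager-lock off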

Re: [Gluster-users] Slow performance of gluster volume

2017-09-06 Thread Krutika Dhananjay
i Krutika, > > Attached the profile stats. I enabled profiling then ran some dd tests. > Also 3 Windows VMs are running on top this volume but did not do any stress > testing on the VMs. I have left the profiling enabled in case more time is > needed for useful stats. > > Thanx &g

Re: [Gluster-users] Slow performance of gluster volume

2017-09-05 Thread Krutika Dhananjay
if=/dev/zero of=testfile bs=1G > count=1 I get 65MB/s on the vms gluster volume (and the network traffic > between the servers reaches ~ 500Mbps), while when testing with dd > if=/dev/zero of=testfile bs=1G count=1 *oflag=direct *I get a consistent > 10MB/s and the network traffic hardly reachi
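The two tests quoted above, cleaned up as a sketch (the target path is illustrative):

    # buffered write: the page cache absorbs the I/O, inflating throughput
    dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1G count=1
    # O_DIRECT write: bypasses the cache and shows backend throughput
    dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1G count=1 oflag=direct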

Re: [Gluster-users] Poor performance with shard

2017-09-04 Thread Krutika Dhananjay
Hi, Speaking from shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of the writes - such as creating the shard and then writing to it and then updating the aggregated
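One way to create such a preallocated image — a sketch using qemu-img; format, path, and size are illustrative:

    # preallocation=falloc reserves the blocks up front, so the shards are
    # created once at creation time instead of piecemeal during guest writes
    qemu-img create -f qcow2 -o preallocation=falloc /mnt/glusterfs/images/vm1.qcow2 50G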

Re: [Gluster-users] Slow performance of gluster volume

2017-09-04 Thread Krutika Dhananjay
I'm assuming you are using this volume to store vm images, because I see shard in the options list. Speaking from shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of

Re: [Gluster-users] Bug 1473150 - features/shard:Lookup on shard 18 failed. Base file gfid = b00f5de2-d811-44fe-80e5-1f382908a55a [No data available], the [No data available]

2017-07-24 Thread Krutika Dhananjay
+gluster-users ML Hi, I've responded to your bug report here - https://bugzilla.redhat.com/show_bug.cgi?id=1473150#c3 Kindly let us know if the patch fixes your bug. -Krutika On Thu, Jul 20, 2017 at 3:12 PM, zhangjianwei1...@163.com < zhangjianwei1...@163.com> wrote: > Hi Krutika

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-07-12 Thread Krutika Dhananjay
gt; > *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@ > gluster.org] *On Behalf Of *gen...@gencgiyen.com > *Sent:* Thursday, July 6, 2017 11:06 AM > > *To:* 'Krutika Dhananjay' <kdhan...@redhat.com> > *Cc:* 'gluster-user' <gluster-users@gluster.org

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-07-05 Thread Krutika Dhananjay
What if you disabled eager lock and ran your test again on the sharded configuration, along with the profile output? # gluster volume set <volname> cluster.eager-lock off -Krutika On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Thanks. I think reusing the sa

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-07-04 Thread Krutika Dhananjay
; > 5+0 records out > > 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 66.7978 s, 80.4 MB/s > > > > > > >> dd if=/dev/zero of=/mnt/testfile bs=5G count=1 > > This also gives same result. (bs and count reversed) > > > > > > And this example hav

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-07-04 Thread Krutika Dhananjay
idea or > suggestion? > > > > Thanks, > > -Gencer > > > > *From:* Krutika Dhananjay [mailto:kdhan...@redhat.com] > *Sent:* Friday, June 30, 2017 3:50 PM > > *To:* gen...@gencgiyen.com > *Cc:* gluster-user <gluster-users@gluster.org> > *Subject:* Re: [Glu

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-06-30 Thread Krutika Dhananjay
oc-50-14-18:/bricks/brick6 > > Brick17: sr-10-loc-50-14-18:/bricks/brick7 > > Brick18: sr-10-loc-50-14-18:/bricks/brick8 > > Brick19: sr-10-loc-50-14-18:/bricks/brick9 > > Brick20: sr-10-loc-50-14-18:/bricks/brick10 > > Options Reconfigured: > > features.s

Re: [Gluster-users] Very slow performance on Sharded GlusterFS

2017-06-30 Thread Krutika Dhananjay
Could you please provide the volume-info output? -Krutika On Fri, Jun 30, 2017 at 4:23 PM, wrote: > Hi, > > > > I have an 2 nodes with 20 bricks in total (10+10). > > > > First test: > > > > 2 Nodes with Distributed – Striped – Replicated (2 x 2) > > 10GbE Speed between

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No, you don't need to do any of that. Just executing volume-set commands is sufficient for the changes to take effect. -Krutika On Wed, Jun 21, 2017 at 3:48 PM, Chris Boot <bo...@bootc.net> wrote: > [replying to lists this time] > > On 20/06/17 11:23, Krutika Dhananjay wr

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No. It's just that in the internal testing that was done here, increasing the thread count beyond 4 did not improve the performance any further. -Krutika On Tue, Jun 20, 2017 at 11:30 PM, mabi wrote: > Dear Krutika, > > Sorry for asking so naively but can you tell me on

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Krutika Dhananjay
Couple of things: 1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4. # gluster volume set <volname> performance.stat-prefetch on # gluster volume set <volname> client.event-threads 4 # gluster volume set <volname> server.event-threads 4 2. Also glusterfs-3.10.1

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-06-06 Thread Krutika Dhananjay
ges/d59487fe- > f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f > (93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or directory] > > > > Although the process went smooth, i will run another extensive test > tomorrow just to be sure. > > -- > > Re

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-06-04 Thread Krutika Dhananjay
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Great news. > Is this planned to be published in next release? > > Il 29 mag 2017 3:27 PM, "Kr

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-29 Thread Krutika Dhananjay
Although the process went smooth, i will run another extensive test > tomorrow just to be sure. > > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Monday, May 29, 2017 9:20:29 AM > > *To:* Mahdi Adnan > *C

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-29 Thread Krutika Dhananjay
ached are the logs for both the rebalance and the mount. > > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ------ > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Friday, May 26, 2017 1:12:28 PM > *To:* Mahdi Adnan > *Cc:

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-26 Thread Krutika Dhananjay
; VMs started to fail after rebalancing. > > > > > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Wednesday, May 17, 2017 6:59:20 AM > *To:* gluster-user > *Cc:* Gandalf Corvotempesta; Lin

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-20 Thread Krutika Dhananjay
it's going to be updated ? > > Is there any other recommended place to get the latest rpms ? > > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Mahdi Adnan <mahdi.ad...@outlook.com> > *Sent:* Friday, May 19, 2017 6:14:05 PM > *To:* Krut

[Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-16 Thread Krutika Dhananjay
Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051 These fixes are very much part of the latest 3.10.2 release. Satheesaran within Red Hat

Re: [Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-15 Thread Krutika Dhananjay
Shard translator is currently supported only for VM image store workload. -Krutika On Sun, May 14, 2017 at 12:50 AM, Benjamin Kingston wrote: > Hers's some log entries from nfs-ganesha gfapi > > [2017-05-13 19:02:54.105936] E [MSGID: 133010] >

Re: [Gluster-users] VM going down

2017-05-11 Thread Krutika Dhananjay
Niels, Alessandro's configuration does not have shard enabled. So it definitely has nothing to do with shard not supporting the seek fop. Copy-pasting volume-info output from the first mail: Volume Name: datastore2 Type: Replicate Volume ID: c95ebb5f-6e04-4f09-91b9-bbbe63d83aea Status:

Re: [Gluster-users] VM going down

2017-05-08 Thread Krutika Dhananjay
The newly introduced "SEEK" fop seems to be failing at the bricks. Adding Niels for his inputs/help. -Krutika On Mon, May 8, 2017 at 3:43 PM, Alessandro Briosi wrote: > Hi all, > I have sporadic VM going down which files are on gluster FS. > > If I look at the gluster logs

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-05 Thread Krutika Dhananjay
Yeah, there are a couple of cache consistency issues with performance translators that are causing these exceptions. Some of them were fixed by 3.10.1. Some still remain. Alternatively you can give gluster-block + elasticsearch a try, which doesn't require solving all these caching issues. Here's

Re: [Gluster-users] [Gluster-devel] Enabling shard on EC

2017-05-05 Thread Krutika Dhananjay
Hi, Work is in progress for this (and going a bit slow at the moment because of other priorities). At the moment we support sharding only for VM image store use-case - most common large file + single writer use case we know of. Just curious, what is the use case where you want to use shard+EC?

Re: [Gluster-users] GlusterFS Shard Feature: Max number of files in .shard-Folder

2017-04-24 Thread Krutika Dhananjay
Yes, that's about it. Pranith pretty much summed up whatever I would have said. -Krutika On Sat, Apr 22, 2017 at 12:25 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > +Krutika for any other inputs you may need. > > On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri < >

Re: [Gluster-users] adding arbiter

2017-04-04 Thread Krutika Dhananjay
for more details. -Krutika On Tue, Apr 4, 2017 at 8:09 PM, Alessandro Briosi <a...@metalit.com> wrote: > Il 04/04/2017 16:16, Krutika Dhananjay ha scritto: > > So the corruption bug is seen iff your vms are online while fix-layout > > and/or rebalance is going on. > > Doe

Re: [Gluster-users] adding arbiter

2017-04-04 Thread Krutika Dhananjay
So the corruption bug is seen iff your vms are online while fix-layout and/or rebalance is going on. Does that answer your question? The same issue has now been root-caused and there will be a fix for it soon by Raghavendra G. -Krutika On Mon, Apr 3, 2017 at 6:44 PM, Alessandro Briosi

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-04-04 Thread Krutika Dhananjay
Nope. This is a different bug. -Krutika On Mon, Apr 3, 2017 at 5:03 PM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > This is a good news > Is this related to the previously fixed bug? > > Il 3 apr 2017 10:22 AM, "Krutika Dhananjay" <kd

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-04-03 Thread Krutika Dhananjay
n happen. > Raghavendra(CCed) would be the right person to provide accurate update. > >> >> >> -- >> >> Respectfully >> *Mahdi A. Mahdi* >> >> -- >> *From:* Krutika Dhananjay <kdhan...@redhat.com> >> *Sent:* Tuesday, March

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-20 Thread Krutika Dhananjay
ther volume is mounted using gfapi in oVirt cluster. > > > > > > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Sunday, March 19, 2017 2:01:49 PM > > *To:* Mahdi Adnan >

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Krutika Dhananjay
t; Respectfully > *Mahdi A. Mahdi* > > ------ > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Sunday, March 19, 2017 10:00:22 AM > > *To:* Mahdi Adnan > *Cc:* gluster-users@gluster.org > *Subject:* Re: [Gluster-users] Gluster 3.8.10 reb

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Krutika Dhananjay
im > testing now. > > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ------ > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Sunday, March 19, 2017 8:02:19 AM > > *To:* Mahdi Adnan > *Cc:* gluster-users@gluster.or

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
e? Or is it NFS? -Krutika > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ---------- > *From:* Krutika Dhananjay <kdhan...@redhat.com> > *Sent:* Saturday, March 18, 2017 6:10:40 PM > > *To:* Mahdi Adnan > *Cc:* gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
> doing a very basic routine procedure that should be considered as the > basis of the whole gluster project (as wrote on gluster's homepage) > > > 2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhan...@redhat.com>: > > > > > > On Sat, Mar 18, 2017

Re: [Gluster-users] Sharding?

2017-03-10 Thread Krutika Dhananjay
On Fri, Mar 10, 2017 at 4:09 PM, Cedric Lemarchand wrote: > > > On 10 Mar 2017, at 10:33, Alessandro Briosi wrote: > > > > Il 10/03/2017 10:28, Kevin Lemonnier ha scritto: > >>> I haven't done any test yet, but I was under the impression that > >>> sharding

Re: [Gluster-users] Sharding?

2017-03-10 Thread Krutika Dhananjay
On Fri, Mar 10, 2017 at 3:03 PM, Alessandro Briosi wrote: > Il 10/03/2017 10:28, Kevin Lemonnier ha scritto: > > I haven't done any test yet, but I was under the impression that > sharding feature isn't so stable/mature yet. > In the remote of my mind I remember reading

Re: [Gluster-users] Deleting huge file from glusterfs hangs the cluster for a while

2017-03-09 Thread Krutika Dhananjay
AM, Georgi Mirchev <gmirc...@usa.net> wrote: > > На 03/08/2017 в 03:37 PM, Krutika Dhananjay написа: > > Thanks for your feedback. > > May I know what was the shard-block-size? > > The shard size is 4 MB. > > One way to fix this would be to make shard transla

Re: [Gluster-users] Deleting huge file from glusterfs hangs the cluster for a while

2017-03-08 Thread Krutika Dhananjay
Thanks for your feedback. May I know what was the shard-block-size? One way to fix this would be to make shard translator delete only the base file (0th shard) in the IO path and move the deletion of the rest of the shards to the background. I'll work on this. -Krutika On Fri, Mar 3, 2017 at 10:35

[Gluster-users] Fixes to VM pause issues upon add-brick

2017-03-02 Thread Krutika Dhananjay
Hi Niels, Care to merge the following two 3.8 backports: https://review.gluster.org/16749 and https://review.gluster.org/16750, in that order? One of the users who'd reported this issue has confirmed that the patch fixed the issue. So did Satheesaran. -Krutika

Re: [Gluster-users] Optimal shard size & self-heal algorithm for VM hosting?

2017-02-16 Thread Krutika Dhananjay
On Wed, Feb 15, 2017 at 9:38 PM, Gambit15 wrote: > Hey guys, > I keep seeing different recommendations for the best shard sizes for VM > images, from 64MB to 512MB. > > What's the benefit of smaller v larger shards? > I'm guessing smaller shards are quicker to heal,

Re: [Gluster-users] Error while setting cluster.granular-entry-heal

2017-02-16 Thread Krutika Dhananjay
Could you please attach the "glfsheal-<volname>.log" logfile? -Krutika On Thu, Feb 16, 2017 at 12:05 AM, Andrea Fogazzi wrote: > Hello, > > I have a gluster volume on 3.8.8 which has multiple volumes, each on > distributed/replicated on 5 servers (2 replicas+1 quorum); each volume is

Re: [Gluster-users] When to use striped volumes?

2017-01-17 Thread Krutika Dhananjay
Could you describe what use-case you intend to use striping for? -Krutika On Tue, Jan 17, 2017 at 12:52 PM, Dave Fan wrote: > Hello everyone, > > We are trying to set up a Gluster-based storage for best performance. On > the official Gluster website. It says: > >

Re: [Gluster-users] corruption using gluster and iSCSI with LIO

2016-11-18 Thread Krutika Dhananjay
ld I find the fuse client log? > > On Fri, Nov 18, 2016 at 2:22 AM, Krutika Dhananjay <kdhan...@redhat.com> > wrote: > > Could you attach the fuse client and brick logs? > > > > -Krutika > > > > On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert < >

Re: [Gluster-users] corruption using gluster and iSCSI with LIO

2016-11-17 Thread Krutika Dhananjay
Could you attach the fuse client and brick logs? -Krutika On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert wrote: > Okay, used the exact same config you provided, and adding an arbiter > node (node3) > > After halting node2, VM continues to work after a small

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-14 Thread Krutika Dhananjay
Yes. I apologise for the delay. Disabling sharding would knock the translator itself off the client stack, and since sharding is the only translator that knows how to interpret sharded files and how to aggregate them, removing the translator from the stack

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-14 Thread Krutika Dhananjay
On Mon, Nov 14, 2016 at 8:24 PM, Niels de Vos wrote: > On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote: > > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta < > > gandalf.corvotempe...@gmail.com> wrote: > > > > > 2016-11-14 11:50 GMT+01:00 Pranith

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-14 Thread Krutika Dhananjay
Which data corruption issue is this? Could you point me to the bug report on bugzilla? -Krutika On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Il 12 nov 2016 10:21, "Kevin Lemonnier" ha scritto: > > We've had a lot of

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-11 Thread Krutika Dhananjay
Hi, Yes, this has been reported before by Lindsay Mathieson and Kevin Lemonnier on this list. We just found one issue with replace-brick that we recently fixed. In your case, are you doing add-brick and changing the replica count (say from 2 -> 3) or are you adding "replica-count" number of

Re: [Gluster-users] Improving IOPS

2016-11-03 Thread Krutika Dhananjay
There is a compound fops feature coming up which reduces the number of calls over the network in AFR transactions, thereby improving performance. It will be available in 3.9 (and latest upstream master too) if you're interested in trying it out, but DO NOT use it in production yet. It may have some

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-22 Thread Krutika Dhananjay
Awesome. Thanks for the logs. Will take a look. -Krutika On Sun, Oct 23, 2016 at 5:47 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 20/10/2016 9:13 PM, Krutika Dhananjay wrote: > >> It would be awesome if you could tell us whether you >> see the issue

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-20 Thread Krutika Dhananjay
Thanks a lot, Lindsay! Appreciate the help. It would be awesome if you could tell us whether you see the issue with FUSE as well, while we get around to setting up the environment and running the test ourselves. -Krutika On Thu, Oct 20, 2016 at 2:57 AM, Lindsay Mathieson <

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-19 Thread Krutika Dhananjay
Agreed. I will run the same test on an actual vm setup one of these days and see if I manage to recreate the issue (after I have completed some of my long-pending tasks). Meanwhile, if any of you finds a consistent, simpler test case that hits the issue, feel free to reply on this thread. At least I had

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-16 Thread Krutika Dhananjay
rum-type: server >> network.remote-dio: enable >> cluster.eager-lock: enable >> performance.quick-read: off >> performance.read-ahead: off >> performance.io-cache: off >> performance.stat-prefetch: off >> features.shard: on >> features.shard-block-size: 64

Re: [Gluster-users] Healing Delays

2016-10-01 Thread Krutika Dhananjay
Any errors/warnings in the glustershd logs? -Krutika On Sat, Oct 1, 2016 at 8:18 PM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > This was raised earlier but I don't believe it was ever resolved and it is > becoming a serious issue for me. > > > I'm doing rolling upgrades on our
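Those logs live under /var/log/glusterfs/ on each node; a quick scan for errors and warnings, as a sketch:

    # the severity field after the timestamp is E for errors, W for warnings
    grep -E "\] (E|W) \[" /var/log/glusterfs/glustershd.log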

Re: [Gluster-users] File Size and Brick Size

2016-09-27 Thread Krutika Dhananjay
ab7557d582484a068c3478e342069326 lastlog -Krutika On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > Hi, > > What version of gluster are you using? > Also, could you share your volume configuration (`gluster volume info`)? > > -Krutika > >

Re: [Gluster-users] File Size and Brick Size

2016-09-27 Thread Krutika Dhananjay
Hi, What version of gluster are you using? Also, could you share your volume configuration (`gluster volume info`)? -Krutika On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N wrote: > On 09/28/2016 12:16 AM, ML Wong wrote: > > Hello Ravishankar, > Thanks for introducing

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-06 Thread Krutika Dhananjay
On Tue, Sep 6, 2016 at 7:27 PM, David Gossage <dgoss...@carouselchecks.com> wrote: > Going to top post with solution Krutika Dhananjay came up with. His steps > were much less volatile and could be done with volume still being actively > used and also much less prone to acciden

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Krutika Dhananjay
Theoretically whatever you said is correct (at least from shard's perspective). Adding Rafi who's worked on tiering to know if he thinks otherwise. It must be mentioned that sharding + tiering hasn't been tested as such till now by us at least. Did you try it? If so, what was your experience?

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-09-05 Thread Krutika Dhananjay
Could you please attach the glusterfs client and brick logs? Also provide output of `gluster volume info`. -Krutika On Tue, Sep 6, 2016 at 4:29 AM, Kevin Lemonnier wrote: > >- What was the original (and current) geometry? (status and info) > > It was a 1x3 that I was

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-08-31 Thread Krutika Dhananjay
.dht=0x0001 > trusted.glusterfs.volume-id=0x5889332e50ba441e8fa5cce3ae6f3a15 > user.some-name=0x736f6d652d76616c7565 > > meta-data split-brain? heal <> info split-brain shows no files or > entries. If I had thought ahead I would have

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-08-31 Thread Krutika Dhananjay
No, sorry, it's working fine. I may have missed some step because of which I saw that problem. /.shard is also healing fine now. Let me know if it works for you. -Krutika On Wed, Aug 31, 2016 at 12:49 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote: > OK I just hit the other issue t
