Re: [Gluster-users] GFS performance under heavy traffic

2019-12-17 Thread Raghavendra Gowdappa
What version of Glusterfs are you using? Though I am not sure what the root cause of your problem is, I wanted to point out a bug in read-ahead that causes read amplification over the network [1][2]; it should be fixed in recent versions. [1]

Re: [Gluster-users] glusterfs7 client memory leak found

2019-12-03 Thread Raghavendra Gowdappa
Thanks Cynthia. I am adding gluster-users/gluster-devel to get more visibility. +Gluster-users +gluster-devel On Tue, Dec 3, 2019 at 1:21 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi ! > > Good day! > > During my recent test for glusterfs 7.0 I find

Re: [Gluster-users] [Gluster-devel] "rpc_clnt_ping_timer_expired" errors

2019-11-30 Thread Raghavendra Gowdappa
Adding gluster-users and gluster-devel as the discussion has some generic points. +Gluster-users +Gluster Devel On Mon, Mar 4, 2019 at 11:43 PM Raghavendra Gowdappa wrote: > > > On Mon, Mar 4, 2019 at 11:26 PM Yaniv Kaul wrote: > >> Is it that busy that it cannot reply f

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Raghavendra Gowdappa
On Tue, Jul 30, 2019 at 9:04 AM Diego Remolina wrote: > Will this kill the actual process or simply trigger the dump? > This - kill -SIGUSR1 ... - will deliver signal SIGUSR1 to the process. Glusterfs processes (bricks, client mount) are implemented to do a statedump on receiving SIGUSR1.
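For reference, a minimal sketch of the statedump workflow described above, assuming a FUSE client mount and the default statedump directory (the PID is a placeholder):
# gluster --print-statedumpdir
/var/run/gluster
# kill -SIGUSR1 <PID-of-glusterfs-or-glusterfsd>
# ls /var/run/gluster/glusterdump.<PID>.dump.*
The process keeps running; it only writes a file named glusterdump.<pid>.dump.<timestamp> into the statedump directory.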

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Raghavendra Gowdappa
On Tue, Jul 30, 2019 at 5:44 AM Diego Remolina wrote: > Unfortunately statedump crashes on both machines, even freshly rebooted. > Does sending SIGUSR1 to individual processes also crash? # kill -SIGUSR1 > [root@ysmha01 ~]# gluster --print-statedumpdir > /var/run/gluster > [root@ysmha01 ~]#

Re: [Gluster-users] write request hung in write-behind

2019-06-03 Thread Raghavendra Gowdappa
> 771 > [root@rhel-201 35]# grep -rn "global.callpool.stack.*.frame.1" -A 5 > glusterdump.20106.dump.1559038081 |grep translator | grep replicate-7 | > wc -l > 2 > [root@rhel-201 35]# grep -rn "global.callpool.stack.*.frame.1" -A 5 > glusterdump.20106.dump.15590380

Re: [Gluster-users] write request hung in write-behind

2019-06-03 Thread Raghavendra Gowdappa
On Mon, Jun 3, 2019 at 11:57 AM Xie Changlong wrote: > Hi all > > Test gluster 3.8.4-54.15 gnfs, i saw a write request hung in write-behind > followed by 1545 FLUSH requests. I found a similar > bugfix https://bugzilla.redhat.com/show_bug.cgi?id=1626787, but not sure > if it's the right one. > >

Re: [Gluster-users] Settings for VM hosting

2019-04-20 Thread Raghavendra Gowdappa
On Fri, Apr 19, 2019 at 12:48 PM wrote: > On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote: > > Looks good mostly. > > You can also turn on performance.stat-prefetch, and also set > > Ah the corruption bug has been fixed, I missed that. Great ! > Do you have details or bug

Re: [Gluster-users] Write Speed unusually slow when both bricks are online

2019-04-10 Thread Raghavendra Gowdappa
I would need following data: * client and brick volume profile - https://glusterdocs.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/ * cmdline of exact test you were running regards, On Wed, Apr 10, 2019 at 9:02 PM Jeff Forbes wrote: > I have two CentOS-6 servers running
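A sketch of collecting the brick-side volume profile for the test window, following the guide linked above (VOLNAME is a placeholder; the client-side profile is described in the same guide):
# gluster volume profile VOLNAME start
  (run the exact test command here)
# gluster volume profile VOLNAME info > /tmp/brick-profile.txt
# gluster volume profile VOLNAME stop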

Re: [Gluster-users] Transport endpoint is not connected failures in

2019-03-30 Thread Raghavendra Gowdappa
On Sat, Mar 30, 2019 at 1:18 AM wrote: > Hello, > > > > Yes I did find some hits on this in the following logs. We started seeing > failures after upgrading to 5.3 from 4.6. > There are no relevant fixes for ping timer expiry between 5.5 and 5.3. So, I attribute the failures not being seen to

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-28 Thread Raghavendra Gowdappa
ased on slow > hardware in the use case of small files (images). > > > Thx, > Hubert > > Am Mo., 4. März 2019 um 16:59 Uhr schrieb Raghavendra Gowdappa > : > > > > Were you seeing high Io-wait when you captured the top output? I guess > not as you mentione

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-28 Thread Raghavendra Gowdappa
On Thu, Mar 28, 2019 at 2:37 PM Xavi Hernandez wrote: > On Thu, Mar 28, 2019 at 3:05 AM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar 27, 2019 at 2:20 PM Pranith

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
7, 2019 at 1:13 PM Pranith Kumar Karampuri < >>> pkara...@redhat.com> wrote: >>> >>>> >>>> >>>> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez >>>> wrote: >>>> >>>>> On Wed, Mar 27, 2019 at 1

Re: [Gluster-users] Transport endpoint is not connected failures in

2019-03-27 Thread Raghavendra Gowdappa
On Wed, Mar 27, 2019 at 9:46 PM wrote: > Hello Amar and list, > > > > I wanted to follow-up to confirm that upgrading to 5.5 seem to fix the > “Transport endpoint is not connected failures” for us. > What was the version you saw failures in? Were there any logs matching with the pattern

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
On Wed, Mar 27, 2019 at 4:22 PM Raghavendra Gowdappa wrote: > > > On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez > wrote: > >> Hi Raghavendra, >> >> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa >> wrote: > >>> All, >>>

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez wrote: > Hi Raghavendra, > > On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa > wrote: > >> All, >> >> Glusterfs cleans up POSIX locks held on an fd when the client/mount >> through which those locks

[Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-26 Thread Raghavendra Gowdappa
All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from the bricks/server. This helps Glusterfs avoid a stale lock problem later (for example, if the application unlocks while the connection is still down). However, this means

Re: [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-21 Thread Raghavendra Gowdappa
gluster server. > > So, following your suggestions, I executed, on s04 node, the top command. > In attachment, you can find the related output. > The top output doesn't contain cmd/thread names. Was there anything wrong? > Thank you very much for your help. > Regards, > Mauro

Re: [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-14 Thread Raghavendra Gowdappa
oon as the activity load will be high. > Thank you, > Mauro > > On 14 Mar 2019, at 04:57, Raghavendra Gowdappa > wrote: > > > > On Wed, Mar 13, 2019 at 3:55 PM Mauro Tridici > wrote: > >> Hi Raghavendra, >> >> Yes, server.event-thread has been

Re: [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-13 Thread Raghavendra Gowdappa
x6b) [0x55ef0101632b] ) 0-: > received signum (15), shutting down > > *CRITICALS:* > *CWD: /var/log/glusterfs * > *COMMAND: grep " C " *.log |grep "2019-03-13 06:"* > > no critical errors at 06:xx > only one critical error during the day > > *[root@s06

Re: [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-11 Thread Raghavendra Gowdappa
f “s06" gluster server. > > I noticed a lot of intermittent warning and error messages. > > Thank you in advance, > Mauro > > > > On 4 Mar 2019, at 18:45, Raghavendra Gowdappa wrote: > > > +Gluster Devel , +Gluster-users > > > I would like to point

Re: [Gluster-users] Experiences with FUSE in real world - Presentationat Vault 2019

2019-03-07 Thread Raghavendra Gowdappa
ards, > Strahil Nikolov > On Mar 7, 2019 08:54, Raghavendra Gowdappa wrote: > > Unfortunately, there is no recording. However, we are willing to discuss > our findings if you've specific questions. We can do that in this thread. > > On Thu, Mar 7, 2019 at 10:33

Re: [Gluster-users] Release 6: Release date update

2019-03-07 Thread Raghavendra Gowdappa
I just found a fix for https://bugzilla.redhat.com/show_bug.cgi?id=1674412. Since it's a deadlock, I am wondering whether this should be in 6.0. What do you think? On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan wrote: > Hi, > > Release-6 was to be an early March release, and due to finding

Re: [Gluster-users] Experiences with FUSE in real world - Presentationat Vault 2019

2019-03-06 Thread Raghavendra Gowdappa
> On Mar 5, 2019 11:13, Raghavendra Gowdappa wrote: > > All, > > Recently me, Manoj and Csaba presented on positives and negatives of > implementing File systems in userspace using FUSE [1]. We had based the > talk on our experiences with Glusterfs having FUSE as the nat

[Gluster-users] Experiences with FUSE in real world - Presentation at Vault 2019

2019-03-05 Thread Raghavendra Gowdappa
All, Recently Manoj, Csaba and I presented on the positives and negatives of implementing file systems in userspace using FUSE [1]. We based the talk on our experiences with Glusterfs, which has FUSE as its native interface. The slides can also be found at [1]. [1]

Re: [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-04 Thread Raghavendra Gowdappa
change. > > Regards, > Mauro > > On 4 Mar 2019, at 16:55, Raghavendra Gowdappa wrote: > > > > On Mon, Mar 4, 2019 at 8:54 PM Mauro Tridici > wrote: > >> Hi Raghavendra, >> >> thank you for your reply. >> Yes, you are right. It is a problem

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Raghavendra Gowdappa
On Mon, Mar 4, 2019 at 7:47 PM Raghavendra Gowdappa wrote: > > > On Mon, Mar 4, 2019 at 4:26 PM Hu Bert wrote: > >> Hi Raghavendra, >> >> at the moment iowait and cpu consumption is quite low, the main >> problems appear during the weekend (high traffic, es

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Raghavendra Gowdappa
please collect output of following command and send back the collected data? # top -bHd 3 > top.output > > Hubert > > Am Mo., 4. März 2019 um 11:31 Uhr schrieb Raghavendra Gowdappa > : > > > > what is the per thread CPU usage like on these clients? With highly > concurr

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Raghavendra Gowdappa
What is the per-thread CPU usage like on these clients? With highly concurrent workloads we've seen the single thread that reads requests from /dev/fuse (the fuse reader thread) become a bottleneck. We would like to know what the CPU usage of this thread looks like (you can use top -H). On Mon, Mar 4,
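One way to capture the per-thread usage being asked about here, assuming the PID of the glusterfs mount process is known (interval and iteration count are illustrative):
# top -bHd 3 -n 20 -p <PID-of-glusterfs-mount> > top.output
In the output, a single consistently busy thread of the mount process (the fuse reader thread) would support the bottleneck theory above.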

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote: > > > On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa > wrote: > >> >> >> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa >> wrote: >> >>> All, >>> >

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote: > All, > > We've found perf xlators io-cache and read-ahead not adding any > performance improvement. At best read-ahead is redundant due to kernel > read-ahead > One thing we are still figuring out is whether

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
turn on for a class of applications or problems. Or are you just talking about the standard group settings for virt as a > custom profile? > > On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa > wrote: > > https://review.gluster.org/22203 > > On Tue, Feb 12, 2019 at 5:38 PM Raghavend

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
https://review.gluster.org/22203 On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote: > All, > > We've found perf xlators io-cache and read-ahead not adding any > performance improvement. At best read-ahead is redundant due to kernel > read-ahead and at worst io-cac

[Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades performance for workloads that don't involve re-reads. Given that VFS already has both of these
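For anyone wanting to try this manually before any default changes, the two xlators can be disabled per volume with the usual CLI (VOLNAME is a placeholder):
# gluster volume set VOLNAME performance.read-ahead off
# gluster volume set VOLNAME performance.io-cache off
# gluster volume get VOLNAME performance.read-ahead
# gluster volume get VOLNAME performance.io-cache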

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-11 Thread Raghavendra Gowdappa
n, Feb 11, 2019, 7:19 PM Raghavendra Gowdappa > wrote: > >> >> >> On Mon, Feb 11, 2019 at 3:49 PM João Baúto < >> joao.ba...@neuro.fchampalimaud.org> wrote: >> >>> Although I don't have these error messages, I'm having fuse crashes as >&g

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-11 Thread Raghavendra Gowdappa
; Artem >> >> -- >> Founder, Android Police <http://www.androidpolice.com>, APK Mirror >> <http://www.apkmirror.com/>, Illogical Robot LLC >> beerpla.net | +ArtemRussakovskii >> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR >> <http://twit

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-11 Thread Raghavendra Gowdappa
APK Mirror >> <http://www.apkmirror.com/>, Illogical Robot LLC >> beerpla.net | +ArtemRussakovskii >> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR >> <http://twitter.com/ArtemR> >> >> >> On Fri, Feb 8, 2019 at 7:22 PM Raghavendra

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-08 Thread Raghavendra Gowdappa
>>>>> Hi Artem, >>>>>>>>>> >>>>>>>>>> Opened https://bugzilla.redhat.com/show_bug.cgi?id=1671603 (ie, >>>>>>>>>> as a clone of other bugs where recent discussions happened), and >>

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-08 Thread Raghavendra Gowdappa
On Fri, Feb 8, 2019 at 8:50 AM Raghavendra Gowdappa wrote: > > > On Fri, Feb 8, 2019 at 8:48 AM Raghavendra Gowdappa > wrote: > >> One possible reason could be >> https://review.gluster.org/r/18b6d7ce7d490e807815270918a17a4b392a829d >> > > https://r

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-07 Thread Raghavendra Gowdappa
On Fri, Feb 8, 2019 at 8:48 AM Raghavendra Gowdappa wrote: > One possible reason could be > https://review.gluster.org/r/18b6d7ce7d490e807815270918a17a4b392a829d > https://review.gluster.org/#/c/glusterfs/+/19997/ as that changed some code in epoll handler. Though the change i

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-07 Thread Raghavendra Gowdappa
any particular pattern you observed before the crash. >>>>>>>>> >>>>>>>>> -Amar >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Jan 31, 2019 at 11:40 PM Artem Russakovskii <

Re: [Gluster-users] 0-epoll: Failed to dispatch handler

2019-02-04 Thread Raghavendra Gowdappa
On Mon, Feb 4, 2019 at 8:18 PM Dieter Molketin < dieter.molke...@deutsche-telefon.de> wrote: > After upgrade from glusterfs 3.12 to version 5.3 I see following error > message in all logfiles multiple times: > > [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to > dispatch handler

[Gluster-users] Memory management, OOM kills and glusterfs

2019-02-04 Thread Raghavendra Gowdappa
All, Csaba, Manoj and I are presenting our experiences with using FUSE as an interface for Glusterfs at Vault'19 [1]. One of the areas where Glusterfs has faced difficulties is memory management. One of the reasons for high memory consumption has been the amount of memory consumed by glusterfs
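For readers who want to inspect the memory numbers themselves, a hedged sketch: trigger a statedump and look at the allocator summary and per-translator memory accounting it contains (PID and paths are placeholders; section names can vary by version):
# kill -SIGUSR1 <PID-of-glusterfs-mount>
# grep -A 6 mallinfo /var/run/gluster/glusterdump.<PID>.dump.*
# grep -c usage-type /var/run/gluster/glusterdump.<PID>.dump.*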

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-01-30 Thread Raghavendra Gowdappa
On Thu, Jan 31, 2019 at 2:14 AM Artem Russakovskii wrote: > Also, not sure if related or not, but I got a ton of these "Failed to > dispatch handler" in my logs as well. Many people have been commenting > about this issue here https://bugzilla.redhat.com/show_bug.cgi?id=1651246. >

Re: [Gluster-users] query about glusterfs 3.12-3 write-behind.c coredump

2019-01-29 Thread Raghavendra Gowdappa
On Wed, Jan 30, 2019 at 9:44 AM Raghavendra Gowdappa wrote: > > > On Wed, Jan 30, 2019 at 7:35 AM Li, Deqian (NSB - CN/Hangzhou) < > deqian...@nokia-sbell.com> wrote: > >> Hi, >> >> >> >> Could you help to check this coredump? >> >

Re: [Gluster-users] query about glusterfs 3.12-3 write-behind.c coredump

2019-01-29 Thread Raghavendra Gowdappa
On Wed, Jan 30, 2019 at 7:35 AM Li, Deqian (NSB - CN/Hangzhou) < deqian...@nokia-sbell.com> wrote: > Hi, > > > > Could you help to check this coredump? > > We are using glusterfs 3.12-3(3 replicated bricks solution ) to do > stability testing under high CPU load like 80% by stress and doing I/O.

Re: [Gluster-users] query about glusterfs 3.12-3 write-behind.c coredump

2019-01-29 Thread Raghavendra Gowdappa
On Wed, Jan 30, 2019 at 7:35 AM Li, Deqian (NSB - CN/Hangzhou) < deqian...@nokia-sbell.com> wrote: > Hi, > > > > Could you help to check this coredump? > > We are using glusterfs 3.12-3(3 replicated bricks solution ) to do > stability testing under high CPU load like 80% by stress and doing I/O.

Re: [Gluster-users] query about glusterd epoll thread get stuck

2019-01-29 Thread Raghavendra Gowdappa
On Tue, Jan 29, 2019 at 12:11 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi, > > We are using glusterfs version 3.12 for 3 brick I find that occasionally > after reboot all 3 sn nodes simultaneously, the glusterd process on one sn > nodes may get stuck, when you

Re: [Gluster-users] writev: Transport endpoint is not connected

2019-01-22 Thread Raghavendra Gowdappa
On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira wrote: > Dear all, > > I've been trying to benchmark a gluster file system using the MPIIO API of > IOR. Almost all of the times I try to run the application with more than 6 > tasks performing I/O (mpirun -n N, for N > 6) I get the error: "writev:

Re: [Gluster-users] A broken file that can not be deleted

2019-01-10 Thread Raghavendra Gowdappa
On Wed, Jan 9, 2019 at 7:48 PM Dmitry Isakbayev wrote: > I am seeing a broken file that exists on 2 out of 3 nodes. > Wondering whether it's a case of split brain. > The application trying to use the file throws file permissions error. ls, > rm, mv, touch all throw "Input/output error" > > $

Re: [Gluster-users] Input/output error on FUSE log

2019-01-05 Thread Raghavendra Gowdappa
On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa wrote: > > > On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack wrote: > >> Hi all, >> >> >> I'm having a problem writing to our volume. When writing files larger >> than about 2GB, I get an intermittent iss

Re: [Gluster-users] Input/output error on FUSE log

2019-01-05 Thread Raghavendra Gowdappa
On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack wrote: > Hi all, > > > I'm having a problem writing to our volume. When writing files larger > than about 2GB, I get an intermittent issue where the write will fail and > return Input/Output error. This is also shown in the FUSE log of the > client

Re: [Gluster-users] java application crushes while reading a zip file

2019-01-02 Thread Raghavendra Gowdappa
PM Dmitry Isakbayev > wrote: > >> The software ran with all of the options turned off over the weekend >> without any problems. >> I will try to collect the debug info for you. I have re-enabled the 3 >> three options, but yet to see the problem reoccurring. >> >

Re: [Gluster-users] On making ctime generator enabled by default in stack

2019-01-02 Thread Raghavendra Gowdappa
On Mon, Nov 12, 2018 at 10:48 AM Amar Tumballi wrote: > > > On Mon, Nov 12, 2018 at 10:39 AM Vijay Bellur wrote: > >> >> >> On Sun, Nov 11, 2018 at 8:25 PM Raghavendra Gowdappa >> wrote: >> >>> >>> >>> On Sun, Nov 11, 2018

[Gluster-users] [DHT] serialized readdir(p) across subvols and effect on performance

2018-12-31 Thread Raghavendra Gowdappa
All, As many of us are aware, readdir(p)s are serialized across DHT subvols. One of the intuitive first reactions to this algorithm is that readdir(p) is going to be slow. However, this is only partly true, as reading the contents of a directory is normally split into multiple readdir(p) calls and

Re: [Gluster-users] java application crushes while reading a zip file

2018-12-29 Thread Raghavendra Gowdappa
043a40e7c7651967bd9 >> Virtualization: kvm >> Operating System: CentOS Linux 7 (Core) >>CPE OS Name: cpe:/o:centos:centos:7 >> Kernel: Linux 3.10.0-862.3.2.el7.x86_64 >> Architecture: x86-64 >> >> >> >> >> On F

Re: [Gluster-users] java application crushes while reading a zip file

2018-12-28 Thread Raghavendra Gowdappa
o away. Instead, if the performance is agreeable to you, please keep these xlators off in production. > On Thu, Dec 27, 2018 at 10:55 PM Raghavendra Gowdappa > wrote: > >> >> >> On Fri, Dec 28, 2018 at 3:13 AM Dmitry Isakbayev >> wrote: >> >>>

Re: [Gluster-users] java application crushes while reading a zip file

2018-12-27 Thread Raghavendra Gowdappa
ee the errors, is it possible to collect: * strace of the java application (strace -ff -v ...) * a fuse-dump of the glusterfs mount (use the option --dump-fuse while mounting)? I also need another favour from you. By trial and error, can you point out which of the many performance xlators you've turned

Re: [Gluster-users] java application crushes while reading a zip file

2018-12-27 Thread Raghavendra Gowdappa
What version of glusterfs are you using? It might be either (1) a stale metadata issue or (2) an inconsistent ctime issue. Can you try turning off all performance xlators? If the issue is (1), that should help. On Fri, Dec 28, 2018 at 1:51 AM Dmitry Isakbayev wrote: > Attempted to set
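A sketch of what turning off all the performance xlators usually amounts to on the CLI (VOLNAME is a placeholder; the exact option list can differ between releases):
# gluster volume set VOLNAME performance.quick-read off
# gluster volume set VOLNAME performance.io-cache off
# gluster volume set VOLNAME performance.read-ahead off
# gluster volume set VOLNAME performance.readdir-ahead off
# gluster volume set VOLNAME performance.stat-prefetch off
# gluster volume set VOLNAME performance.write-behind off
# gluster volume set VOLNAME performance.open-behind off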

Re: [Gluster-users] Invisible files

2018-12-14 Thread Raghavendra Gowdappa
On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira wrote: > It happened to me using gluster 5.0, on OpenSUSE Leap 15, during a > benchmark with IOR: the volume would seem normally mounted, but I was > unable to overwrite files, and ls would show the volume as totally empty. > I could write new files

Re: [Gluster-users] Failed to get fd context for a non-anonymous fd

2018-12-13 Thread Raghavendra Gowdappa
On Fri, Dec 14, 2018 at 11:49 AM wrote: > Hi, > > > > We updated our glusterfs servers and clients from 3.10 to 4.1.6. Although > all seems to be working just fine we see errors like these on almost all > glusterfs servers: > > > > [2018-12-14 02:35:03.524378] I

Re: [Gluster-users] Directory selfheal failed: Unable to form layout for directory on 4.1.5 fuse client

2018-11-13 Thread Raghavendra Gowdappa
On Wed, Nov 14, 2018 at 3:04 AM mabi wrote: > Hi, > > I just wanted to report that since I upgraded my GluterFS client from > 3.12.14 to 4.1.5 on a Debian 9 client which uses FUSE mount I see a lot of > these entries for many different directories in the mnt log file on the > client: > >

Re: [Gluster-users] duplicate performance.cache-size with different values

2018-11-12 Thread Raghavendra Gowdappa
On Mon, Nov 12, 2018 at 9:36 PM Davide Obbi wrote: > Hi, > > i have noticed that this option is repeated twice with different values in > gluster 4.1.5 if you run gluster volume get volname all > > performance.cache-size 32MB > ... > performance.cache-size 128MB

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Raghavendra Gowdappa
On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur wrote: > > > On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa > wrote: > >> >> >> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote: >> >>> >>> >>> On Mon, No

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote: > > > On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa > wrote: > >> All, >> >> There is a patch [1] from Kotresh, which makes ctime generator as default >> in stack. Currently ctime generator is being

[Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
All, There is a patch [1] from Kotresh which makes the ctime generator the default in the stack. Currently the ctime generator is recommended only for use cases where ctime is important (like for Elasticsearch). However, a reliable (c)(m)time can fix many consistency issues within the glusterfs stack too.

Re: [Gluster-users] [Gluster-devel] Glusterfs and Structured data

2018-10-07 Thread Raghavendra Gowdappa
+Gluster-users On Mon, Oct 8, 2018 at 9:34 AM Raghavendra Gowdappa wrote: > > > On Fri, Feb 9, 2018 at 4:30 PM Raghavendra Gowdappa > wrote: > >> >> >> - Original Message - >> > From: "Pranith Kumar Karampuri" >> > To:

Re: [Gluster-users] High CPU usage with 3 bricks

2018-10-04 Thread Raghavendra Gowdappa
On Fri, Oct 5, 2018 at 1:54 AM Tyler Salwierz wrote: > Hi, > > I have a server setup on a powerful processor (i7-8700). I have three > bricks setup on localhost - each their own hard drive. When I'm downloading > content at 100MB/s I see upwards of 180% CPU usage from GlusterFS. > > Uploading

Re: [Gluster-users] sharding in glusterfs

2018-10-02 Thread Raghavendra Gowdappa
On Sun, Sep 30, 2018 at 9:54 PM Ashayam Gupta wrote: > Hi Pranith, > > Thanks for you reply, it would be helpful if you can please help us with > the following issues with respect to sharding. > The gluster version we are using is *glusterfs 4.1.4 *on Ubuntu 18.04.1 > LTS > > >-

[Gluster-users] Update of work on fixing POSIX compliance issues in Glusterfs

2018-10-01 Thread Raghavendra Gowdappa
All, There have been issues related to POSIX compliance, especially while running database workloads on Glusterfs. Recently we've worked on fixing some of them. This mail is an update on that effort. The issues themselves can be classified into

Re: [Gluster-users] Data on gluster volume gone

2018-09-19 Thread Raghavendra Gowdappa
On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa wrote: > Can you give volume info? Looks like you are using 2 way replica. > Yes indeed. gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0 +Pranith. +Ravi. Not sure whether

Re: [Gluster-users] Data on gluster volume gone

2018-09-19 Thread Raghavendra Gowdappa
Can you give the volume info? It looks like you are using a 2-way replica. On Wed, Sep 19, 2018 at 9:39 AM, Johan Karlsson wrote: > I have two servers set up with glusterFS in replica mode, a single volume > exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS > > After a package upgrade +

Re: [Gluster-users] mv lost some files ?

2018-09-05 Thread Raghavendra Gowdappa
# file: data4/bricks/project2/371_37829/test-dir > trusted.gfid=0x57e0a8945a224ab4be2c6c71eada6217 > trusted.glusterfs.dht=0xdbde0bb124924924279e79e6 > trusted.glusterfs.quota.dirty=0x3000 > trusted.glusterfs.quota.f8c0ee37-c980-4162-b7ed-15f911d084

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
volume have about 25T data. > > Best Regards. > > Raghavendra Gowdappa 于2018年9月5日周三 上午10:50写道: > >> >> >> On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: >> >>> Hi all: >>> >>> I have a replicated volume project2 with info: >>>

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
Forgot to ask, what version of Glusterfs are you using? On Wed, Sep 5, 2018 at 8:20 AM, Raghavendra Gowdappa wrote: > > > On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: > >> Hi all: >> >> I have a replicated volume project2 with info: >> Volume Name: project

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: > Hi all: > > I have a replicated volume project2 with info: > Volume Name: project2 Type: Distributed-Replicate Volume ID: > 60175b8e-de0e-4409-81ae-7bb5eb5cacbf Status: Started Snapshot Count: 0 > Number of Bricks: 84 x 2 = 168 Transport-type: tcp

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-31 Thread Raghavendra Gowdappa
ing the option Raghavendra mentioned, you ll have to execute it >> explicitly, as it's not part of group option yet: >> >> #gluster vol set VOLNAME performance.nl-cache-positive-entry on >> >> Also from the past experience, setting the below option has helped in >>

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-31 Thread Raghavendra Gowdappa
On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck wrote: > On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote: > > +Mohit. +Milind > > > > @Mohit/Milind, > > > > Can you check logs and see whether you can find anything relevant? > > From glances at the

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-30 Thread Raghavendra Gowdappa
disconnect msgs without any reason. That normally points to the reason for the disconnect being in the network rather than a Glusterfs-initiated disconnect. > Cheers > Richard > > On 08/30/2018 02:40 PM, Raghavendra Gowdappa wrote: > > Normally client logs will give a clue on why the disconne

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-30 Thread Raghavendra Gowdappa
, Aug 30, 2018 at 5:00 PM Raghavendra Gowdappa > wrote: > >> >> >> On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi >> wrote: >> >>> Thanks Amar, >>> >>> i have enabled the negative lookups cache on the volume: >>> >> I thin

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-30 Thread Raghavendra Gowdappa
On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi wrote: > Thanks Amar, > > i have enabled the negative lookups cache on the volume: > > To deflate a tar archive (not compressed) of 1.3GB it takes aprox 9mins > which can be considered a slight improvement from the previous 12-15 > however still not

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-30 Thread Raghavendra Gowdappa
Normally client logs will give a clue on why the disconnections are happening (ping-timeout, wrong port etc). Can you look into client logs to figure out what's happening? If you can't find anything, can you send across client logs? On Wed, Aug 29, 2018 at 6:11 PM, Richard Neuboeck wrote: > Hi
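A hedged example of scanning the client log for the disconnect reason; the log name typically follows the convention of the mount path with slashes replaced by dashes (path and patterns are illustrative):
# grep -Ei "disconnect|ping_timer_expired|connection refused" /var/log/glusterfs/mnt-myvol.log | tail -n 20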

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-14 Thread Raghavendra Gowdappa
00:05:15 /usr/sbin/glusterfs >>> --volfile-server=gfs1a --volfile-id=myvol-private /mnt/myvol-private >>> >>> Then I ran the following command >>> >>> sudo kill -USR1 456 >>> >>> but now I can't find where the files are stored. Are these supposed to >>&g

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 6:47 PM, mabi wrote: > Hi Nithya, > > Thanks for the fast answer. Here the additional info: > > 1. gluster volume info > > Volume Name: myvol-private > Type: Replicate > Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5 > Status: Started > Snapshot Count: 0 > Number of

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
5, 2018, at 02:55, Raghavendra Gowdappa > wrote: > > > > On Sun, Aug 5, 2018 at 1:22 PM, Yuhao Zhang wrote: > >> This is a semi-production server and I can't bring it down right now. >> Will try to get the monitoring output when I get a chance. >> > >

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
PU processes are brick daemons (glusterfsd) and > htop showed they were in status D. However, I saw zero zpool IO as clients > were all hanging. > > > On Aug 5, 2018, at 02:38, Raghavendra Gowdappa > wrote: > > > > On Sun, Aug 5, 2018 at 12:44 PM, Yuhao Zhang wrote: &g

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
On Sun, Aug 5, 2018 at 12:44 PM, Yuhao Zhang wrote: > Hi, > > I am running into a situation where heavy writes cause the Gluster server to go > into a zombie state with many high-CPU processes and all clients hang; it is > almost 100% reproducible on my machine. Hope someone can help. > Can you give us the

Re: [Gluster-users] gluster performance and new implementation

2018-07-23 Thread Raghavendra Gowdappa
I doubt whether it will make a big difference, but you can turn on performance.flush-behind. On Mon, Jul 23, 2018 at 4:51 PM, Γιώργος Βασιλόπουλος wrote: > Hello > > I have set up an expirimental gluster replica 3 arbiter 1 volume for ovirt. > > Network between gluster servers is 2x1G (mode4
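For completeness, the option mentioned above is set per volume (VOLNAME is a placeholder):
# gluster volume set VOLNAME performance.flush-behind on
# gluster volume get VOLNAME performance.flush-behind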

Re: [Gluster-users] Slow write times to gluster disk

2018-07-14 Thread Raghavendra Gowdappa
On 6/30/18, Raghavendra Gowdappa wrote: > On Fri, Jun 29, 2018 at 10:38 PM, Pat Haley wrote: > >> >> Hi Raghavendra, >> >> We ran the tests (write tests) and I copied the log files for both the >> server and the client to http://mseas.mit.edu/downloa

Re: [Gluster-users] Slow write times to gluster disk

2018-07-13 Thread Raghavendra Gowdappa
head doesn't flush its cache due to fstats making this behavior optional. You can try this patch and let us know about results. Will let you know when patch is ready. > Thanks > > Pat > > > > On 07/06/2018 01:27 AM, Raghavendra Gowdappa wrote: > > > > On Fri, J

Re: [Gluster-users] Slow write times to gluster disk

2018-07-05 Thread Raghavendra Gowdappa
take me another 2 days to work on this issue again. So, most likely you'll have an update on this next week. > Thanks > > Pat > > > > On 06/29/2018 11:25 PM, Raghavendra Gowdappa wrote: > > > > On Fri, Jun 29, 2018 at 10:38 PM, Pat Haley wrote: > >> &

Re: [Gluster-users] Slow write times to gluster disk

2018-06-29 Thread Raghavendra Gowdappa
> performance.readdir-ahead: on > nfs.disable: on > nfs.export-volumes: off > [root@mseas-data2 ~]# > > > On 06/29/2018 12:28 PM, Raghavendra Gowdappa wrote: > > > > On Fri, Jun 29, 2018 at 8:24 PM, Pat Haley wrote: > >> >> Hi Raghavendra, >> >>

Re: [Gluster-users] Slow write times to gluster disk

2018-06-29 Thread Raghavendra Gowdappa
me set diagnostics.client-log-level TRACE Also are you sure that open-behind was turned off? Can you give the output of, # gluster volume info > Thanks > > Pat > > > > > On 06/25/2018 09:39 PM, Raghavendra Gowdappa wrote: > > > > On Tue, Jun 26, 2018 a

Re: [Gluster-users] Slow write times to gluster disk

2018-06-25 Thread Raghavendra Gowdappa
manually using gluster cli. Following are the options and their values: performance.md-cache-timeout=600 network.inode-lru-limit=50000 > Thanks > > Pat > > > > > On 06/22/2018 07:51 AM, Raghavendra Gowdappa wrote: > > > > On Thu, Jun 21, 2018 at 8:41 PM, Pat Haley wro
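The two values quoted above map to the following volume-set commands (VOLNAME is a placeholder):
# gluster volume set VOLNAME performance.md-cache-timeout 600
# gluster volume set VOLNAME network.inode-lru-limit 50000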

Re: [Gluster-users] Slow write times to gluster disk

2018-06-22 Thread Raghavendra Gowdappa
> directly impact the performance or is it to help collect data? If the > latter, where will the data be located? > It impacts performance. > Thanks again. > > Pat > > > > On 06/21/2018 01:01 AM, Raghavendra Gowdappa wrote: > > > > On Thu, Jun

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but small > number of writes. Since writes cached in write-behind would invalidate > metadata cache

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
Please note that these suggestions are for native fuse mount. On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but small > number of writes. Since writes cac

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
off performance.write-behind. @Pat, Can you set, # gluster volume set performance.write-behind off and redo the tests writing to glusterfs mount? Let us know about the results you see. regards, Raghavendra On Thu, Jun 21, 2018 at 8:33 AM, Raghavendra Gowdappa wrote: > > > On Th

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 8:32 AM, Raghavendra Gowdappa wrote: > For the case of reading from Glusterfs mount, read-ahead should help. > However, we've known issues with read-ahead[1][2]. To work around these, > can you try with, > > 1. Turn off performance.open-behind > #

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
For the case of reading from a Glusterfs mount, read-ahead should help. However, there are known issues with read-ahead [1][2]. To work around these, can you try the following: 1. Turn off performance.open-behind #gluster volume set performance.open-behind off 2. Enable the group metadata-cache # gluster
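Spelled out with a volume name, the two steps suggested here would look roughly like this (VOLNAME is a placeholder; the metadata-cache group applies a bundle of md-cache related settings):
# gluster volume set VOLNAME performance.open-behind off
# gluster volume set VOLNAME group metadata-cache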
