Re: [Gluster-users] Write-behind breaks Mercurial

2011-06-10 Thread Raghavendra Gowdappa
Hi Simon, This issue is a variant of Bug 801 (http://bugs.gluster.com/show_bug.cgi?id=801). Mercurial is accessing the file /uwsgi/.hg/store/00changelog.i.a using two fds. Based on the default policy of glusterfs whether to use direct-io-mode or
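For reference, direct-io-mode can be forced at mount time on the FUSE client; a minimal sketch (server and volume names are placeholders):
  # mount -t glusterfs -o direct-io-mode=enable server1:/volname /mnt/glusterfs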

Re: [Gluster-users] 3.3 Quota Problems

2012-09-17 Thread Raghavendra Gowdappa
Hi Ling, A bug has been filed at https://bugzilla.redhat.com/show_bug.cgi?id=857874 Thanks for reporting. regards, Raghavendra. - Original Message - From: Ling Ho l...@slac.stanford.edu To: Gluster-users@gluster.org Sent: Wednesday, September 12, 2012 1:34:34 AM Subject: Re:

Re: [Gluster-users] Low (0.2ms) latency reads, is it possible at all?

2013-04-23 Thread Raghavendra Gowdappa
Hi Willem, Please find the comments inline: - Original Message - From: Willem gwil...@gmail.com To: gluster-users@gluster.org Sent: Thursday, April 18, 2013 11:58:46 PM Subject: [Gluster-users] Low (0.2ms) latency reads, is it possible at all? I'm testing GlusterFS viability for

Re: [Gluster-users] Changing position of md-cache in xlator graph

2014-10-21 Thread Raghavendra Gowdappa
Adding correct gluster-devel mail id. - Original Message - From: Raghavendra Gowdappa rgowd...@redhat.com To: gluster-devel gluster-de...@nongnu.org Sent: Tuesday, 21 October, 2014 3:26:21 PM Subject: Changing position of md-cache in xlator graph Hi all, The context is bz 1138970

Re: [Gluster-users] [Gluster-devel] cannot delete non-empty directory

2015-02-09 Thread Raghavendra Gowdappa
- Original Message - From: David F. Robinson david.robin...@corvidtec.com To: Shyam srang...@redhat.com, Gluster Devel gluster-de...@gluster.org, gluster-users@gluster.org, Susant Palai spa...@redhat.com Sent: Monday, February 9, 2015 10:55:44 PM Subject: Re: [Gluster-devel]

[Gluster-users] [posix-compliance] unlink and access to file through open fd

2015-09-04 Thread Raghavendra Gowdappa
All, Posix allows access to a file through open fds even if the name associated with the file is deleted. While this works in glusterfs for most cases, there are some corner cases where we fail. 1. Reboot of brick: === With the reboot of a brick, the fd is lost. unlink would've
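The open-fd-after-unlink semantics in question can be demonstrated from any shell on a local filesystem; a minimal sketch:
  # echo hello > /tmp/f
  # exec 3< /tmp/f   # open a read fd on the file
  # rm /tmp/f        # unlink the name
  # cat <&3          # the read still succeeds through the open fd
  # exec 3<&-        # close the fd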

Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Raghavendra Gowdappa
Came across a glibc bug which could've caused some corruptions. While googling for possible problems, we found that there is an issue (https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in glibc-2.17-121.el7. From the bug we found the following test-script to determine if the glibc is
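A quick first check is comparing the installed glibc against the fixed build named in the bug; a minimal sketch:
  # rpm -q glibc   # fixed in glibc-2.17-121.el7 per the bug above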

Re: [Gluster-users] Fedora upgrade to f24 installed 3.8.0 client and broke mounting

2016-06-27 Thread Raghavendra Gowdappa
- Original Message - > From: "Avra Sengupta" <aseng...@redhat.com> > To: "Vijay Bellur" <vbel...@redhat.com>, "Alastair Neil" > <ajneil.t...@gmail.com>, "gluster-users" > <gluster-users@gluster.org>, "

Re: [Gluster-users] [Gluster-devel] Why rdma.so is missing often

2016-08-07 Thread Raghavendra Gowdappa
- Original Message - > From: "jayakrishnan mm" > To: "Gluster Devel" , gluster-users@gluster.org > Sent: Monday, July 25, 2016 3:27:42 PM > Subject: [Gluster-devel] Why rdma.so is missing often > > Gluster version 3.7.6 > I get

Re: [Gluster-users] Reconnecting Client to Brick

2016-08-08 Thread Raghavendra Gowdappa
- Original Message - > From: "Danny Lee" > To: gluster-users@gluster.org > Sent: Thursday, August 4, 2016 1:25:43 AM > Subject: [Gluster-users] Reconnecting Client to Brick > > Hi, > > I have a 3-node replicated cluster using the native glusterfs mount, and >

Re: [Gluster-users] Turning off readdirp in the entire stack on fuse mount

2017-03-01 Thread Raghavendra Gowdappa
- Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" > <gluster-users@gluster.org> > Sent: Thursday, March 2, 2017 9:48:08 AM > Subject:

Re: [Gluster-users] incorrect usage value on a directory

2016-09-15 Thread Raghavendra Gowdappa
Hi Sergei, You can set the marker "dirty" xattr using the key trusted.glusterfs.quota.dirty. You have two choices: 1. Setting it through a gluster mount. This will set the key on _all_ bricks. [root@unused personal]# gluster volume info No volumes present [root@unused personal]# rm -rf /home/export/ptop-1 &&
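A minimal sketch of the mount-based approach (the path is a placeholder and the dirty value shown is an assumption; verify it against the marker documentation before use):
  # setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /mnt/volname/dir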

Re: [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-07 Thread Raghavendra Gowdappa
- Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" > <gluster-users@gluster.org> > Sent: Tuesday, November 8, 2016 10:37:56 AM > Sub

[Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-07 Thread Raghavendra Gowdappa
Hi all, We have an option called "cluster.readdir-optimize" which alters the behavior of readdirp in DHT. This value affects how storage/posix treats dentries corresponding to directories (not files). When this value is on, * DHT asks only one subvol/brick to return dentries
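To experiment with the option (volume name is a placeholder):
  # gluster volume set <volname> cluster.readdir-optimize on
  # gluster volume get <volname> cluster.readdir-optimize   # verify the setting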

Re: [Gluster-users] [Gluster-devel] A question of GlusterFS dentries!

2016-11-01 Thread Raghavendra Gowdappa
- Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Keiviw" <kei...@163.com> > Cc: gluster-de...@gluster.org, "gluster-users" <gluster-users@gluster.org> > Sent: Wednesday, November 2, 2016 9:38:

Re: [Gluster-users] [Gluster-devel] A question of GlusterFS dentries!

2016-11-01 Thread Raghavendra Gowdappa
- Original Message - > From: "Keiviw" > To: gluster-de...@gluster.org > Sent: Tuesday, November 1, 2016 12:41:02 PM > Subject: [Gluster-devel] A question of GlusterFS dentries! > > Hi, > In GlusterFS distributed volumes, listing a non-empty directory was slow. > Then I

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-10-13 Thread Raghavendra Gowdappa
- Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Joe Julian" <j...@julianfamily.org> > Cc: "Raghavendra G" <raghaven...@gluster.com>, "Pranith Kumar Karampuri" > <pkara...@redhat

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-10-13 Thread Raghavendra Gowdappa
I have a patch [1] which I think fixes the problem in write-behind. If possible, can any of you please let me know whether it fixes the issue (with client.io-threads turned on)? [1] http://review.gluster.org/15579 - Original Message - > From: "Joe Julian" > To:

[Gluster-users] Selectively setting loglevel for logs from an xlator

2017-01-12 Thread Raghavendra Gowdappa
All, Not sure many of us know how to selectively set the log-level of an xlator. Thought this might be helpful to someone. To selectively set the log-level of an xlator we need to do a setfattr on any path in glusterfs with "trusted.glusterfs.%s.set-log-level" (where %s is the name of the xlator in
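A minimal sketch following the key format above (mount path and xlator name are examples; xlator names usually take the form <volname>-<xlator> as they appear in the volfile):
  # setfattr -n trusted.glusterfs.<volname>-write-behind.set-log-level -v DEBUG /mnt/glusterfs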

[Gluster-users] How commonly applications make use of fadvise?

2017-08-10 Thread Raghavendra Gowdappa
Hi all, In a conversation between me, Milind and Csaba, Milind pointed out fadvise(2) [1] and its potential benefits to Glusterfs' caching translators like read-ahead etc. After discussing it, we agreed that our performance translators can leverage the hints to provide better

Re: [Gluster-users] Rebalance without changing layout

2017-06-21 Thread Raghavendra Gowdappa
- Original Message - > From: "Tahereh Fattahi" > To: gluster-users@gluster.org > Sent: Friday, May 19, 2017 12:21:53 AM > Subject: [Gluster-users] Rebalance without changing layout > > Hi > Is it possible to rebalance data in gluster without changing layout? >

[Gluster-users] [DHT] The myth of two hops for linkto file resolution

2017-04-29 Thread Raghavendra Gowdappa
All, It's a common perception that the resolution of a file having a linkto file on the hashed-subvol requires two hops: 1. client to hashed-subvol. 2. client to the subvol where the file actually resides. While it is true that a fresh lookup behaves this way, the other fact that gets ignored is
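As background, linkto files can be spotted directly on the bricks; a sketch (brick path is a placeholder):
  # ls -l /bricks/brick1/file   # linkto files carry the mode ---------T
  # getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/file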

Re: [Gluster-users] [DHT] The myth of two hops for linkto file resolution

2017-05-04 Thread Raghavendra Gowdappa
- Original Message - > From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" > <gluster-

Re: [Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)

2017-09-18 Thread Raghavendra Gowdappa
- Original Message - > From: "Sam McLeod" > To: gluster-users@gluster.org > Sent: Friday, September 15, 2017 6:42:13 AM > Subject: [Gluster-users] 0-client_t: null client [Invalid argument] & high > CPU usage (Gluster 3.12) > > Howdy, > > I'm setting up

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-17 Thread Raghavendra Gowdappa
On Mon, Jun 18, 2018 at 8:11 AM, Raghavendra Gowdappa wrote: > From the bt: > > #8 0x7f6ef977e6de in rda_readdirp (frame=0x7f6eec862320, > this=0x7f6ef4019f20, fd=0x7f6ed40077b0, size=357, off=2, > xdata=0x7f6eec0085a0) at readdir-ahead.c:266 > #9 0x7f6ef952db4c i

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-17 Thread Raghavendra Gowdappa
From the bt: #8 0x7f6ef977e6de in rda_readdirp (frame=0x7f6eec862320, this=0x7f6ef4019f20, fd=0x7f6ed40077b0, size=357, off=2, xdata=0x7f6eec0085a0) at readdir-ahead.c:266 #9 0x7f6ef952db4c in dht_readdirp_cbk (frame=, cookie=0x7f6ef4019f20, this=0x7f6ef40218a0, op_ret=2, op_errno=0,

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-17 Thread Raghavendra Gowdappa
On Mon, Jun 18, 2018 at 9:39 AM, Raghavendra Gowdappa wrote: > > > On Mon, Jun 18, 2018 at 8:11 AM, Raghavendra Gowdappa > wrote: > >> From the bt: >> >> #8 0x7f6ef977e6de in rda_readdirp (frame=0x7f6eec862320, >> this=0x7f6ef4019f20, fd=0x

Re: [Gluster-users] Slow write times to gluster disk

2018-06-29 Thread Raghavendra Gowdappa
me set diagnostics.client-log-level TRACE Also are you sure that open-behind was turned off? Can you give the output of, # gluster volume info > Thanks > > Pat > > > > > On 06/25/2018 09:39 PM, Raghavendra Gowdappa wrote: > > > > On Tue, Jun 26, 2018 a
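A sketch of the referenced commands in full (volume name is a placeholder):
  # gluster volume set <volname> diagnostics.client-log-level TRACE
  # gluster volume info <volname>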

Re: [Gluster-users] Slow write times to gluster disk

2018-06-29 Thread Raghavendra Gowdappa
> performance.readdir-ahead: on > nfs.disable: on > nfs.export-volumes: off > [root@mseas-data2 ~]# > > > On 06/29/2018 12:28 PM, Raghavendra Gowdappa wrote: > > > > On Fri, Jun 29, 2018 at 8:24 PM, Pat Haley wrote: > >> >> Hi Raghavendra, >> >>

Re: [Gluster-users] Slow write times to gluster disk

2018-06-25 Thread Raghavendra Gowdappa
manually using gluster cli. Following are the options and their values: performance.md-cache-timeout=600 network.inode-lru-limit=50000 > Thanks > > Pat > > > > > On 06/22/2018 07:51 AM, Raghavendra Gowdappa wrote: > > > > On Thu, Jun 21, 2018 at 8:41 PM, Pat Haley wro
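A sketch of setting those two options via the CLI (volume name is a placeholder):
  # gluster volume set <volname> performance.md-cache-timeout 600
  # gluster volume set <volname> network.inode-lru-limit 50000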

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 8:32 AM, Raghavendra Gowdappa wrote: > For the case of reading from Glusterfs mount, read-ahead should help. > However, we have known issues with read-ahead [1][2]. To work around these, > can you try with, > > 1. Turn off performance.open-behind > #

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
Please note that these suggestions are for native fuse mounts. On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but a small > number of writes. Since writes cac

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
off performance.write-behind. @Pat, Can you set, # gluster volume set performance.write-behind off and redo the tests writing to glusterfs mount? Let us know about the results you see. regards, Raghavendra On Thu, Jun 21, 2018 at 8:33 AM, Raghavendra Gowdappa wrote: > > > On Th
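The usual complete form of that command (volume name is a placeholder):
  # gluster volume set <volname> performance.write-behind off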

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
For the case of reading from Glusterfs mount, read-ahead should help. However, we have known issues with read-ahead [1][2]. To work around these, can you try with: 1. Turn off performance.open-behind #gluster volume set performance.open-behind off 2. Enable group metadata-cache # gluster
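A sketch of both steps in full (volume name is a placeholder):
  # gluster volume set <volname> performance.open-behind off
  # gluster volume set <volname> group metadata-cache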

Re: [Gluster-users] Slow write times to gluster disk

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa wrote: > For the case of writes to glusterfs mount, > > I saw in earlier conversations that there are too many lookups, but a small > number of writes. Since writes cached in write-behind would invalidate > metadata cache

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-10 Thread Raghavendra Gowdappa
+gluster-devel - Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Omar Kohl" <omar.k...@iternity.com> > Cc: gluster-users@gluster.org > Sent: Wednesday, January 10, 2018 11:47:31 AM > Subject: Re: [Glust

[Gluster-users] cluster/dht: restrict migration of opened files

2018-01-16 Thread Raghavendra Gowdappa
All, Patch [1] prevents migration of opened files during rebalance operation. If patch [1] affects you, please voice your concerns. [1] is a stop-gap fix for the problem discussed in issues [2][3]. [1] https://review.gluster.org/#/c/19202/ [2] https://github.com/gluster/glusterfs/issues/308

[Gluster-users] Documentation on readdir performance

2018-01-17 Thread Raghavendra Gowdappa
All, A github issue on this (tracking mostly DHT stuff) at: https://github.com/gluster/glusterfs/issues/117 Slides of the talk on the same topic presented at Vault 2017: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf regards, Raghavendra

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-09 Thread Raghavendra Gowdappa
Sorry about the delayed response. Had to dig into the history to answer various "why"s. - Original Message - > From: "Omar Kohl" > To: gluster-users@gluster.org > Sent: Tuesday, December 26, 2017 6:41:48 PM > Subject: [Gluster-users] Exact purpose of

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-09 Thread Raghavendra Gowdappa
- Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Omar Kohl" <omar.k...@iternity.com> > Cc: gluster-users@gluster.org > Sent: Wednesday, January 10, 2018 10:56:21 AM > Subject: Re: [Gluster-users] Exact p

Re: [Gluster-users] [FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash

2018-01-30 Thread Raghavendra Gowdappa
Note that live-streaming is available at: https://fosdem.org/2018/schedule/streaming/ The talks will be archived too. - Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Gluster Devel" <gluster-de...@gluster.org>, "g

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-28 Thread Raghavendra Gowdappa
- Original Message - > From: "Pranith Kumar Karampuri" > To: "Alan Orth" > Cc: "gluster-users" > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in

Re: [Gluster-users] Run away memory with gluster mount

2018-01-28 Thread Raghavendra Gowdappa
- Original Message - > From: "Nithya Balachandran" > To: "Ravishankar N" > Cc: "Csaba Henk" , "gluster-users" > > Sent: Monday, January 29, 2018 10:49:43 AM > Subject: Re: [Gluster-users] Run

[Gluster-users] [FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash

2018-01-29 Thread Raghavendra Gowdappa
All, Krutika, Manoj and I are presenting a talk at FOSDEM'18 [1]. Please plan to attend. While we are at the event (present on the 3rd and 4th), we are happy to chat with you about anything related to Glusterfs. The efforts leading to this talk are captured in [2]. [1]

Re: [Gluster-users] Run away memory with gluster mount

2018-01-30 Thread Raghavendra Gowdappa
- Original Message - > From: "Dan Ragle" <dan...@biblestuph.com> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Ravishankar N" > <ravishan...@redhat.com> > Cc: gluster-users@gluster.org, "Csaba Henk" <ch...@r

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-30 Thread Raghavendra Gowdappa
- Original Message - > From: "Alan Orth" <alan.o...@gmail.com> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com> > Cc: "gluster-users" <gluster-users@gluster.org> > Sent: Tuesday, January 30, 2018 1:37:40 PM > Subject: Re:

Re: [Gluster-users] Run away memory with gluster mount

2018-02-05 Thread Raghavendra Gowdappa
much appreciated. Will watch for the next release and retest then. > > Cheers! > > Dan > > > > > On 2 February 2018 at 02:57, Dan Ragle <dan...@biblestuph.com > > <mailto:dan...@biblestuph.com>> wrote: > > > > > > > > On 1/

Re: [Gluster-users] [FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash

2018-02-11 Thread Raghavendra Gowdappa
The talk is up on youtube at: https://www.youtube.com/watch?v=0oQYPKD_kJg regards, Raghavendra On Tue, Jan 30, 2018 at 9:14 PM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > Note that live-streaming is available at: > https://fosdem.org/2018/schedule/streaming/ &

Re: [Gluster-users] [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Raghavendra Gowdappa
<raghaven...@gluster.com> > Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Gluster Devel" > <gluster-de...@gluster.org> > Sent: Friday, February 9, 2018 1:34:25 PM > Subject: Re: [Gluster-devel] Glusterfs and Structured data >

Re: [Gluster-users] Run away memory with gluster mount

2018-02-05 Thread Raghavendra Gowdappa
I missed your reply :). Sorry about that. - Original Message - > From: "Raghavendra Gowdappa" <rgowd...@redhat.com> > To: "Dan Ragle" <dan...@biblestuph.com> > Cc: "Csaba Henk" <ch...@redhat.com>, "gluster-users" > &l

Re: [Gluster-users] Slow write times to gluster disk

2018-06-22 Thread Raghavendra Gowdappa
> directly impact the performance or is it to help collect data? If the > latter, where will the data be located? > It impacts performance. > Thanks again. > > Pat > > > > On 06/21/2018 01:01 AM, Raghavendra Gowdappa wrote: > > > > On Thu, Jun

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
On Sun, Aug 5, 2018 at 12:44 PM, Yuhao Zhang wrote: > Hi, > > I am running into a situation where heavy writes cause the Gluster server to go > into zombie state with many high-CPU processes and all clients hanging; it is > almost 100% reproducible on my machine. Hope someone can help. > Can you give us the

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
CPU processes are brick daemons (glusterfsd) and > htop showed they were in status D. However, I saw zero zpool IO as clients > were all hanging. > > > On Aug 5, 2018, at 02:38, Raghavendra Gowdappa > wrote: > > > > On Sun, Aug 5, 2018 at 12:44 PM, Yuhao Zhang wrote: &g

Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-05 Thread Raghavendra Gowdappa
On Aug 5, 2018, at 02:55, Raghavendra Gowdappa > wrote: > > > > On Sun, Aug 5, 2018 at 1:22 PM, Yuhao Zhang wrote: > >> This is a semi-production server and I can't bring it down right now. >> Will try to get the monitoring output when I get a chance. >> > >

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 6:47 PM, mabi wrote: > Hi Nithya, > > Thanks for the fast answer. Here the additional info: > > 1. gluster volume info > > Volume Name: myvol-private > Type: Replicate > Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5 > Status: Started > Snapshot Count: 0 > Number of

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-14 Thread Raghavendra Gowdappa
00:05:15 /usr/sbin/glusterfs >>> --volfile-server=gfs1a --volfile-id=myvol-private /mnt/myvol-private >>> >>> Then I ran the following command >>> >>> sudo kill -USR1 456 >>> >>> but now I can't find where the files are stored. Are these supposed to >>&g

Re: [Gluster-users] Slow write times to gluster disk

2018-07-14 Thread Raghavendra Gowdappa
On 6/30/18, Raghavendra Gowdappa wrote: > On Fri, Jun 29, 2018 at 10:38 PM, Pat Haley wrote: > >> >> Hi Raghavendra, >> >> We ran the tests (write tests) and I copied the log files for both the >> server and the client to http://mseas.mit.edu/downloa

Re: [Gluster-users] Slow write times to gluster disk

2018-07-13 Thread Raghavendra Gowdappa
read-ahead doesn't flush its cache due to fstats, making this behavior optional. You can try this patch and let us know about the results. Will let you know when the patch is ready. > Thanks > > Pat > > > > On 07/06/2018 01:27 AM, Raghavendra Gowdappa wrote: > > > > On Fri, J

Re: [Gluster-users] gluster performance and new implementation

2018-07-23 Thread Raghavendra Gowdappa
I doubt whether it will make a big difference, but you can turn on performance.flush-behind. On Mon, Jul 23, 2018 at 4:51 PM, Γιώργος Βασιλόπουλος wrote: > Hello > > I have set up an experimental gluster replica 3 arbiter 1 volume for ovirt. > > Network between gluster servers is 2x1G (mode4
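The usual form of the suggested command (volume name is a placeholder):
  # gluster volume set <volname> performance.flush-behind on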

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-30 Thread Raghavendra Gowdappa
On Thu, Aug 30, 2018 at 5:00 PM Raghavendra Gowdappa > wrote: > >> >> >> On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi >> wrote: >> >>> Thanks Amar, >>> >>> I have enabled the negative lookups cache on the volume: >>> >> I thin

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: > Hi all: > > I have a replicated volume project2 with info: > Volume Name: project2 Type: Distributed-Replicate Volume ID: > 60175b8e-de0e-4409-81ae-7bb5eb5cacbf Status: Started Snapshot Count: 0 > Number of Bricks: 84 x 2 = 168 Transport-type: tcp

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
Forgot to ask, what version of Glusterfs are you using? On Wed, Sep 5, 2018 at 8:20 AM, Raghavendra Gowdappa wrote: > > > On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: > >> Hi all: >> >> I have a replicated volume project2 with info: >> Volume Name: project

Re: [Gluster-users] mv lost some files ?

2018-09-04 Thread Raghavendra Gowdappa
volume has about 25T of data. > > Best Regards. > > Raghavendra Gowdappa wrote on Wed, Sep 5, 2018 at 10:50 AM: > >> >> >> On Tue, Sep 4, 2018 at 5:28 PM, yu sun wrote: > >>> Hi all: >>> >>> I have a replicated volume project2 with info: >>>

Re: [Gluster-users] mv lost some files ?

2018-09-05 Thread Raghavendra Gowdappa
# file: data4/bricks/project2/371_37829/test-dir > trusted.gfid=0x57e0a8945a224ab4be2c6c71eada6217 > trusted.glusterfs.dht=0xdbde0bb124924924279e79e6 > trusted.glusterfs.quota.dirty=0x3000 > trusted.glusterfs.quota.f8c0ee37-c980-4162-b7ed-15f911d084

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-30 Thread Raghavendra Gowdappa
disconnect msgs without any reason. That normally points to the reason for the disconnect being in the network rather than a Glusterfs-initiated disconnect. > Cheers > Richard > > On 08/30/2018 02:40 PM, Raghavendra Gowdappa wrote: > > Normally client logs will give a clue on why the disconne

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-31 Thread Raghavendra Gowdappa
On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck wrote: > On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote: > > +Mohit. +Milind > > > > @Mohit/Milind, > > > > Can you check logs and see whether you can find anything relevant? > > From glances at the

Re: [Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

2018-08-31 Thread Raghavendra Gowdappa
ing the option Raghavendra mentioned, you'll have to execute it >> explicitly, as it's not part of the group option yet: >> >> #gluster vol set VOLNAME performance.nl-cache-positive-entry on >> >> Also, from past experience, setting the below option has helped in >>

Re: [Gluster-users] Slow write times to gluster disk

2018-07-05 Thread Raghavendra Gowdappa
take me another 2 days to work on this issue again. So, most likely you'll have an update on this next week. > Thanks > > Pat > > > > On 06/29/2018 11:25 PM, Raghavendra Gowdappa wrote: > > > > On Fri, Jun 29, 2018 at 10:38 PM, Pat Haley wrote: > >> &

Re: [Gluster-users] Intermittent mount disconnect due to socket poller error

2018-02-28 Thread Raghavendra Gowdappa
Is it possible to attach logfiles of problematic client and bricks? On Thu, Mar 1, 2018 at 3:00 AM, Ryan Lee wrote: > We've been on the Gluster 3.7 series for several years with things pretty > stable. Given that it's reached EOL, yesterday I upgraded to 3.13.2. > Every

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-06 Thread Raghavendra Gowdappa
folks significant time. > > For the rest, I'll reply inline below... > > On Mon, Mar 5, 2018 at 10:39 PM, Raghavendra Gowdappa > <rgowd...@redhat.com> wrote: > > +Csaba. > > > > On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <p...@umich.edu> wrote: > >&g

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-06 Thread Raghavendra Gowdappa
On Tue, Mar 6, 2018 at 10:58 PM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > > On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <p...@umich.edu> wrote: > >> Raghavendra, >> >> I've commited my tests case to https://github.com/powool/gluster.gi

Re: [Gluster-users] cluster.readdir-optimize and disappearing files/dirs bug

2018-04-04 Thread Raghavendra Gowdappa
Can you check whether you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1512437? Note that the fix is not backported to 3.13 branch, but is available on 4.0 through https://bugzilla.redhat.com/1512437. On Tue, Apr 3, 2018 at 11:13 PM, Artem Russakovskii wrote: >

Re: [Gluster-users] [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /

2018-04-04 Thread Raghavendra Gowdappa
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii wrote: > Hi, > > I noticed when I run gluster volume heal data info, the follow message > shows up in the log, along with other stuff: > > [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory >> selfheal

Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids

2018-04-05 Thread Raghavendra Gowdappa
> Brick17: 10.0.6.101:/gluster/brick6/brick > Brick18: 10.0.6.102:/gluster/arbrick6/brick (arbiter) > Brick19: 10.0.6.100:/gluster/brick7/brick > Brick20: 10.0.6.101:/gluster/brick7/brick > Brick21: 10.0.6.102:/gluster/arbrick7/brick (arbiter) > Options Reconfigured: > cluster.server

Re: [Gluster-users] Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory

2018-04-16 Thread Raghavendra Gowdappa
On Mon, Apr 16, 2018 at 1:54 PM, Niels Hendriks wrote: > Hi, > > We have a 3-node gluster setup where gluster is both the server and the > client. > Every few days we have some $random file or directory that does not exist > according to the FUSE mountpoint. When we try to

Re: [Gluster-users] RDMA Client Hang Problem

2018-04-25 Thread Raghavendra Gowdappa
Is InfiniBand itself working fine? You can run tools like ibv_rc_pingpong to find out. On Wed, Apr 25, 2018 at 12:23 PM, Necati E. SISECI wrote: > Dear Gluster-Users, > > I am experiencing RDMA problems. > > I have installed Ubuntu 16.04.4 running with 4.15.0-13-generic
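Typical usage of the tool (shipped in libibverbs-utils; the hostname is a placeholder):
  # ibv_rc_pingpong                # on the server
  # ibv_rc_pingpong server-host    # on the client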

Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids

2018-03-26 Thread Raghavendra Gowdappa
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I

Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids

2018-03-26 Thread Raghavendra Gowdappa
Ian, Do you have a reproducer for this bug? If not a specific one, a general outline of what operations were done on the file will help. regards, Raghavendra On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > > On Mon, Mar 26, 2018 at 12

Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-02 Thread Raghavendra Gowdappa
On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour wrote: > On Mon, 2 Apr 2018, Nithya Balachandran wrote: > > On 2 April 2018 at 14:48, Andreas Davour wrote: >> >> >>> Hi >>> >>> I've found something that works so weird I'm certain I have missed how >>>

Re: [Gluster-users] Invisible files and directories

2018-04-03 Thread Raghavendra Gowdappa
On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko wrote: > Hello! > > We are running a distributed volume that contains 7 bricks. > The volume is mounted using the native fuse client. > > After an unexpected system reboot, some files disappeared from the fuse > mount point but are still available

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-20 Thread Raghavendra Gowdappa
0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, > >=64=0.0% > issued rwt: total=0,131072,0, short=0,0,0, dropped=0,0,0 > latency : target=0, window=0, percentile=100.00%, depth=32 > > Run status group 0 (all jobs): > WRITE: bw=8583KiB/s (8789kB/s), 8583KiB/s-8583KiB/s (8789kB/s-

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Raghavendra Gowdappa
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailingli...@smcleod.net> wrote: > Hi Raghavendra, > > > On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowd...@redhat.com> > wrote: > > Aggregating large number of small writes by write-behind into large writes >

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Raghavendra Gowdappa
On Tue, Mar 20, 2018 at 1:55 AM, TomK wrote: > On 3/19/2018 10:52 AM, Rik Theys wrote: > >> Hi, >> >> On 03/19/2018 03:42 PM, TomK wrote: >> >>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote: >>> Removing NFS or NFS Ganesha from the equation, not very impressed on my >>> own

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Raghavendra Gowdappa
On Mon, Mar 5, 2018 at 8:21 PM, Paul Anderson wrote: > Hi, > > tl;dr summary of below: flock() works, but what does it take to make > sync()/fsync() work in a 3 node GFS cluster? > > I am under the impression that POSIX flock, POSIX > fcntl(F_SETLK/F_GETLK,...), and POSIX

Re: [Gluster-users] Problems with write-behind with large files on Gluster 3.8.4

2018-02-26 Thread Raghavendra Gowdappa
+csaba On Tue, Feb 27, 2018 at 2:49 AM, Jim Prewett wrote: > > Hello, > > I'm having problems when write-behind is enabled on Gluster 3.8.4. > > I have 2 Gluster servers each with a single brick that is mirrored between > them. The code causing these issues reads two

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Raghavendra Gowdappa
Adding csaba On Tue, Mar 6, 2018 at 9:09 AM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > +Csaba. > > On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <p...@umich.edu> wrote: > >> Raghavendra, >> >> Thanks very much for your reply. >> >

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Raghavendra Gowdappa
to experiment > with data durability by killing various gluster server nodes during > the tests. > > If anyone would like our test scripts, I can either tar them up and > email them or put them in github - either is fine with me. (they rely > on current builds of docker and docke

Re: [Gluster-users] RDMA Client Hang Problem

2018-04-25 Thread Raghavendra Gowdappa
fec0:1b14 > remote address: LID 0x, QPN 0x0001e4, PSN 0x10090e, GID > fe80::ee0d:9aff:fec0:1dc8 > 8192000 bytes in 0.01 seconds = 8424.73 Mbit/sec > 1000 iters in 0.01 seconds = 7.78 usec/iter > > > Thank you. > > Necati. > > > On 25-04-2018 12:27,

Re: [Gluster-users] GlusterFS mount with user quotas

2018-04-25 Thread Raghavendra Gowdappa
+Sanoj. On Tue, Apr 24, 2018 at 12:18 PM, Bill Bill wrote: > I cannot seem to get the options for usrquota to work with the mount, > assuming this isn’t supported? I can set volume level quota however, our > GlusterFS is backing servers with user accounts that require quota

Re: [Gluster-users] [Gluster-devel] Glusterfs and Structured data

2018-10-07 Thread Raghavendra Gowdappa
+Gluster-users On Mon, Oct 8, 2018 at 9:34 AM Raghavendra Gowdappa wrote: > > > On Fri, Feb 9, 2018 at 4:30 PM Raghavendra Gowdappa > wrote: > >> >> >> - Original Message - >> > From: "Pranith Kumar Karampuri" >> > To:

Re: [Gluster-users] duplicate performance.cache-size with different values

2018-11-12 Thread Raghavendra Gowdappa
On Mon, Nov 12, 2018 at 9:36 PM Davide Obbi wrote: > Hi, > > I have noticed that this option is repeated twice with different values in > gluster 4.1.5 if you run gluster volume get volname all > > performance.cache-size 32MB > ... > performance.cache-size 128MB

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-11 Thread Raghavendra Gowdappa
On Sun, Nov 11, 2018 at 11:41 PM Vijay Bellur wrote: > > > On Mon, Nov 5, 2018 at 8:31 PM Raghavendra Gowdappa > wrote: > >> >> >> On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote: >> >>> >>> >>> On Mon, No

Re: [Gluster-users] Directory selfheal failed: Unable to form layout for directory on 4.1.5 fuse client

2018-11-13 Thread Raghavendra Gowdappa
On Wed, Nov 14, 2018 at 3:04 AM mabi wrote: > Hi, > > I just wanted to report that since I upgraded my GluterFS client from > 3.12.14 to 4.1.5 on a Debian 9 client which uses FUSE mount I see a lot of > these entries for many different directories in the mnt log file on the > client: > >

Re: [Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote: > > > On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa > wrote: > >> All, >> >> There is a patch [1] from Kotresh, which makes ctime generator as default >> in stack. Currently ctime generator is being

[Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
All, There is a patch [1] from Kotresh which makes the ctime generator default in the stack. Currently the ctime generator is recommended only for usecases where ctime is important (like for Elasticsearch). However, a reliable (c)(m)time can fix many consistency issues within the glusterfs stack too.

Re: [Gluster-users] High CPU usage with 3 bricks

2018-10-04 Thread Raghavendra Gowdappa
On Fri, Oct 5, 2018 at 1:54 AM Tyler Salwierz wrote: > Hi, > > I have a server setup on a powerful processor (i7-8700). I have three > bricks setup on localhost - each their own hard drive. When I'm downloading > content at 100MB/s I see upwards of 180% CPU usage from GlusterFS. > > Uploading

Re: [Gluster-users] sharding in glusterfs

2018-10-02 Thread Raghavendra Gowdappa
On Sun, Sep 30, 2018 at 9:54 PM Ashayam Gupta wrote: > Hi Pranith, > > Thanks for your reply; it would be helpful if you could help us with > the following issues with respect to sharding. > The gluster version we are using is *glusterfs 4.1.4 *on Ubuntu 18.04.1 > LTS > > >-

[Gluster-users] Update of work on fixing POSIX compliance issues in Glusterfs

2018-10-01 Thread Raghavendra Gowdappa
All, There have been issues related to POSIX compliance, especially while running database workloads on Glusterfs. Recently we've worked on fixing some of them. This mail is an update on that effort. The issues themselves can be classified into

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-30 Thread Raghavendra Gowdappa
Normally client logs will give a clue on why the disconnections are happening (ping-timeout, wrong port etc). Can you look into client logs to figure out what's happening? If you can't find anything, can you send across client logs? On Wed, Aug 29, 2018 at 6:11 PM, Richard Neuboeck wrote: > Hi
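A sketch of where to look; the FUSE client log file is named after the mount point (the names here are examples):
  # grep -E "disconnect|ping-timeout" /var/log/glusterfs/mnt-volname.log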
