[Gluster-users] Gluster with iNotify

2012-09-10 Thread Kent Liu
Hi there, Is there any news on inotify support for GlusterFS? I see this in the 3.4 list but I didn't see any progress anywhere. http://www.gluster.org/community/documentation/index.php/Planning34/Inotify Thanks, Kent ___ Gluster-users mailing

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Brian Candler
On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote: While trying to figure out the cause of the bottleneck I've realised that the bottleneck is coming from the client side, as running a concurrent test from two clients would give me about 650mb/s per each client.
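The single-client-versus-concurrent-clients measurement described above can be sketched with dd; the mount point and transfer sizes here are assumptions, not details from the thread:

```shell
# Run on each client against its own file; oflag=direct bypasses the page
# cache so the figure reflects network/brick throughput, not local RAM.
# /mnt/gluster is a hypothetical GlusterFS mount point.
dd if=/dev/zero of=/mnt/gluster/test.$(hostname) bs=1M count=4096 oflag=direct

# For the read side, drop caches first so dd cannot be served from memory:
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/gluster/test.$(hostname) of=/dev/null bs=1M iflag=direct
```

Running the write on one client, then simultaneously on two, separates a per-client ceiling (as reported here) from a server- or network-side one.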

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Stephan von Krawczynski
On Mon, 10 Sep 2012 08:48:03 +0100 Brian Candler b.cand...@pobox.com wrote: On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote: While trying to figure out the cause of the bottleneck I've realised that the bottleneck is coming from the client side as running

Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox

2012-09-10 Thread Yannik Lieblinger
Hi Joe, I know you have much to do, but have you already opened a bug report? -- Best regards Yannik Lieblinger On Tuesday, 28 August 2012 18:44 CEST, Joe Julian j...@julianfamily.org wrote: I was able to cause client crashes with the 3.5 kernels in Fedora and have opened a bug

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread Brian Candler
On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote: below patch should fix your bug. Thank you Jack - that was a very quick response! I'm building a new kernel with this patch now and will report back. However, I think the existence of this bug suggests that Linux with software RAID is

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Brian Candler
On Mon, Sep 10, 2012 at 10:03:14AM +0200, Stephan von Krawczynski wrote: Yes - so in workloads where you have many concurrent clients, this isn't a problem. It's only a problem if you have a single client doing a lot of sequential operations. That is not correct for most cases. GlusterFS

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread Stephan von Krawczynski
On Mon, 10 Sep 2012 09:39:18 +0100 Brian Candler b.cand...@pobox.com wrote: On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote: below patch should fix your bug. Thank you Jack - that was a very quick response! I'm building a new kernel with this patch now and will report back.

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread Jack Wang
2012/9/10 Stephan von Krawczynski sk...@ithnet.com: On Mon, 10 Sep 2012 09:39:18 +0100 Brian Candler b.cand...@pobox.com wrote: On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote: below patch should fix your bug. Thank you Jack - that was a very quick response! I'm building a new

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Stephan von Krawczynski
On Mon, 10 Sep 2012 09:44:26 +0100 Brian Candler b.cand...@pobox.com wrote: On Mon, Sep 10, 2012 at 10:03:14AM +0200, Stephan von Krawczynski wrote: Yes - so in workloads where you have many concurrent clients, this isn't a problem. It's only a problem if you have a single client doing a

[Gluster-users] libgfapi directory operation supported?

2012-09-10 Thread 符永涛
Dear gluster experts, I'm trying to use the glusterfs libgfapi to write a glusterfs application, but I fail to find any directory-related functions in libgfapi. Can anyone give me a clue? Thank you very much. -- 符永涛

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread Brian Candler
On Mon, Sep 10, 2012 at 11:03:41AM +0200, Stephan von Krawczynski wrote: Brian, please re-think this. What you call a stable kernel (Ubuntu 3.2.0-30) is indeed very old. I am talking about the official kernel for the Ubuntu 12.04 Long Term Support server release. If you're saying that Ubuntu

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Whit Blauvelt
On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote: If you have small files you are busted, if you have workload on the clients you are busted and if you have lots of concurrent FS action on the client you are busted. Which leaves you with test cases nowhere near real

Re: [Gluster-users] Gluster with iNotify

2012-09-10 Thread Whit Blauvelt
Hi, I don't know about Gluster support, but I use inotify via incrontab to watch mounted Gluster filesystems and it works well. Most of my use of it is just triggering scripts when new files arrive. See: http://inotify.aiken.cz/?section=incron&page=doc&lang=en There are limitations. It has to
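The incrontab approach Whit describes might look like the entry below; the watched path and handler script are hypothetical, not from the thread:

```shell
# incrontab -e  -- one entry per line: <path> <mask> <command>
# Fire a handler when a file is finished being written into the watched
# directory; incron expands $@ to the watched path and $# to the file name.
/mnt/gluster/incoming IN_CLOSE_WRITE /usr/local/bin/process-upload.sh $@/$#
```

A known limitation of this approach (likely what the truncated message goes on to say) is that inotify on a FUSE mount only sees changes made through that same mount, not writes arriving from other clients or directly on the bricks.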

[Gluster-users] Re: Gluster with iNotify

2012-09-10 Thread Kent Liu
Whit, thanks for your reply. Yes, I am looking for support of the second case. Seems it's in the plan for 3.4. BRs, Kent -- Kent Liu kur...@outlook.com On Monday, 10 September 2012, at 8:19 PM, Whit Blauvelt wrote: Hi, I don't know about Gluster support, but I use inotify via incrontab to watch

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Stephan von Krawczynski
On Mon, 10 Sep 2012 08:06:51 -0400 Whit Blauvelt whit.glus...@transpect.com wrote: On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote: [...] If you're lucky you reach something like 1/3 of the NFS performance. [Gluster NFS Client] Whit There is a reason why one

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Fernando Frediani (Qube)
Well, I would say there is a reason, if the Gluster client performed as expected. Using the Gluster client it should in theory access the file(s) directly from the nodes where they reside, and not have to go through a single node exporting the NFS folder, which would then have to gather the

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Kaleb S. KEITHLEY
On 09/10/2012 08:56 AM, Stephan von Krawczynski wrote: On Mon, 10 Sep 2012 08:06:51 -0400 Whit Blauvelt whit.glus...@transpect.com wrote: On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote: [...] If you're lucky you reach something like 1/3 of the NFS performance.

Re: [Gluster-users] Throughput over infiniband

2012-09-10 Thread Washer, Bryan
Everyone, This is just a response to the issue of nfs vs glusterfs and the performance for gluster, as I think some of the information may be useful here and has not been discussed. For the sake of clarity, I do not run infiniband, but I am running 10GB. My normal production speeds

[Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Manhong Dai
Hi, We got a huge problem on our sun grid engine cluster with glusterfs 3.3.0. Could somebody help me? Based on my understanding, if a folder is removed and recreated on other client node, a program that tries to create a new file under the folder fails very often.

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread harry mangalam
We're using 3ware Inc 9750 SAS2/SATA-II RAID controllers in a 4-brick, 400TB gluster system. The 4 have performed very well overall in about 6mo of production work, alerting us to problem disks, etc. Tho 3ware is an LSI product now, this model retains the familiar if somewhat grunty 3dm2

Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox

2012-09-10 Thread Jason Brooks
On 09/10/2012 01:06 AM, Yannik Lieblinger wrote: Hi Joe, I know you have much to do, but have you already opened a bug report? Looks like this is it: https://bugzilla.redhat.com/show_bug.cgi?id=852224 -- @jasonbrooks

Re: [Gluster-users] XFS and MD RAID

2012-09-10 Thread Brian Candler
On Mon, Sep 10, 2012 at 09:39:18AM +0100, Brian Candler wrote: On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote: below patch should fix your bug. Thank you Jack - that was a very quick response! I'm building a new kernel with this patch now and will report back. It has been

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Anand Avati
On Mon, Sep 10, 2012 at 8:30 AM, Manhong Dai da...@umich.edu wrote: Hi, We got a huge problem on our sun grid engine cluster with glusterfs 3.3.0. Could somebody help me? Based on my understanding, if a folder is removed and recreated on other client node, a program that tries to

[Gluster-users] 3.3 Quota Problems

2012-09-10 Thread Ling Ho
I am trying to use directory quota in our environment and face two problems: 1. When a new quota is set on a directory, it doesn't take effect until the volume is remounted on the client. This is a major inconvenience. 2. If I add a new quota, quota stops working on the client. This is how to
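For reference, the 3.3 directory-quota commands under discussion are roughly the following; the volume and path names are placeholders:

```shell
# Enable quota on the volume, then set a per-directory usage limit.
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects/data 10GB

# Inspect configured limits and current usage.
gluster volume quota myvol list
```

The report above is that the second `limit-usage` step is where things go wrong: the new limit is not honoured until the client remounts, and adding a further limit breaks enforcement entirely.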

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Dai, Manhong
Hi Avati, Thanks a lot! In my case, the application that tries to create a new file is not inside the folder. I wrote a simple bash script to demo this problem. #!/bin/bash FOLDER=/home/mengf_lab/daimh/temp/testdir for ((i=0; i<100; i++)) do echo ###$i### ssh mengf-n1 rm -r
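Laid out as a multi-line script, and with the truncated loop body filled in from the problem description earlier in the thread (everything after the `ssh ... rm -r` is a reconstruction, not the original text), the demo might read:

```shell
#!/bin/bash
# Reconstruction of the reported failure: another node removes and recreates
# a directory while this node tries to create files inside it.
FOLDER=/home/mengf_lab/daimh/temp/testdir   # path from the original script
for ((i=0; i<100; i++))
do
    echo "###$i###"
    # Remove and recreate the directory from a second client (hostname taken
    # from the original snippet; the recreate step is an assumption).
    ssh mengf-n1 "rm -r $FOLDER; mkdir -p $FOLDER"
    # Creating a file through this client's now-stale dentry often fails.
    touch "$FOLDER/file-$i" || echo "create failed at iteration $i"
done
```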

Re: [Gluster-users] 3.3 Quota Problems

2012-09-10 Thread Ling Ho
I found these in the client log: [2012-09-10 18:56:07.728206] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed [2012-09-10 18:56:08.751162] D [glusterfsd-mgmt.c:1441:is_graph_topology_equal] 0-glusterfsd-mgmt: graphs are equal [2012-09-10 18:56:08.751175] D

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Pranith Kumar Karampuri
Manhong Dai, Thanks for the script. Could you give the volume configuration also so that we can re-create the problem in our setups. Pranith. - Original Message - From: Manhong Dai da...@umich.edu To: Anand Avati anand.av...@gmail.com Cc: gluster-users@gluster.org Sent: Tuesday,

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Anand Avati
This is a limitation of the 'handle' nature of FUSE filesystems. You will have to set a lower entry-timeout (mount option) to fix this problem. Avati On Mon, Sep 10, 2012 at 5:13 PM, Dai, Manhong da...@umich.edu wrote: Hi Avati, Thanks a lot! In my case, the application that tries to
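The entry-timeout option Avati mentions is passed at mount time; exactly how mount.glusterfs forwards it may vary by version, so this is a sketch with placeholder server, volume, and mount-point names:

```shell
# Reduce (here: disable) caching of directory-entry lookups in FUSE, so a
# dentry recreated on another client is re-resolved instead of reused.
mount -t glusterfs -o entry-timeout=0 server1:/myvol /mnt/gluster

# Equivalent when invoking the FUSE client binary directly:
glusterfs --entry-timeout=0 --volfile-server=server1 --volfile-id=myvol /mnt/gluster
```

The trade-off: entry-timeout=0 forces a fresh lookup on every path resolution, which costs noticeable performance on metadata-heavy workloads.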

Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine

2012-09-10 Thread Anand Avati
Also, I find it very suspect that 3.2.x did not have the same behavior! Avati On Mon, Sep 10, 2012 at 8:53 PM, Anand Avati anand.av...@gmail.com wrote: This is a limitation of the 'handle' nature of FUSE filesystems. You will have to set a lower entry-timeout (mount option) to fix this