Re: [Gluster-devel] A question of GlusterFS dentries!

2016-11-03 Thread Keiviw
If GlusterFS does not support POSIX seekdir, what problems will users or GlusterFS itself have? (Sent from NetEase Mail Master) On 11/03/2016 12:52, Raghavendra G wrote: On Wed, Nov 2, 2016 at 9:38 AM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: - Original Message - > From: "Keiviw"

[Gluster-devel] A question of GlusterFS dentries!

2016-11-01 Thread Keiviw
Hi, In GlusterFS distributed volumes, listing a non-empty directory was slow. I read the DHT code and found the reason, but I was confused: GlusterFS DHT traverses all the bricks (in the volume) sequentially, so why not use multiple threads to read dentries from multiple bricks
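The sequential traversal asked about above can be sketched against its parallel alternative from a client's point of view. A minimal shell sketch, using hypothetical local directories to stand in for brick backends (the paths are illustrative, not real GlusterFS internals):

```shell
#!/bin/sh
# Simulate two brick backend directories (hypothetical paths, for illustration).
mkdir -p /tmp/brickdemo/brick1 /tmp/brickdemo/brick2
touch /tmp/brickdemo/brick1/a /tmp/brickdemo/brick2/b

# A sequential traversal reads one brick after another; a parallel reader
# can issue one readdir per brick concurrently and merge the results.
for brick in /tmp/brickdemo/brick1 /tmp/brickdemo/brick2; do
    ls "$brick" &     # one concurrent reader per brick
done
wait                  # results are complete once every reader finishes
```

Note that a real parallel DHT readdir would additionally have to merge entries into a stable order and keep offsets meaningful for seekdir, which is part of why this is not a trivial change.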

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-06 Thread Keiviw
d the experiment on a volume with a million files. The client node's memory usage did grow, as I observed from the output of free(1) http://paste.fedoraproject.org/422551/ when I did an `ls`. -Ravi On 09/02/2016 07:31 AM, Keiviw wrote: Exactly, I mounted the volume on a no-brick node (n

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Keiviw
edhat.com> wrote: On 09/01/2016 01:04 PM, Keiviw wrote: Hi, I have found that the GlusterFS client (mounted via FUSE) doesn't cache metadata such as dentries and inodes. I have installed GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 are on nodeA, and on nodeB I mounted the volume to

[Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Keiviw
Hi, I have found that the GlusterFS client (mounted via FUSE) doesn't cache metadata such as dentries and inodes. I have installed GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 are on nodeA, and on nodeB I mounted the volume at /mnt/glusterfs via FUSE. In my test, I executed 'ls

Re: [Gluster-devel] How to solve the FSYNC() ERR

2016-07-11 Thread Keiviw
le with O_DIRECT? Also, what are the offsets and sizes of the writes on this file by this application in the strace output? -Krutika On Mon, Jul 11, 2016 at 2:44 PM, Keiviw <kei...@163.com> wrote: I have checked the page alignment, i.e. the file was larger than one page, and part of

Re: [Gluster-devel] How to solve the FSYNC() ERR

2016-07-11 Thread Keiviw
wrong size, so I would like to test why exactly it is giving this problem. Please note that for an O_DIRECT write to succeed, both the offset and the size should be page-aligned; checking that each is a multiple of 512 is one way to verify this. On Sun, Jul 10, 2016 at 5:19 PM, Keiviw <kei...@163.com>
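The multiple-of-512 check suggested here can be expressed directly. A small shell sketch (512 bytes is a common logical block size, but the actual O_DIRECT alignment requirement depends on the underlying device):

```shell
#!/bin/sh
# Report whether a byte count (an offset or a write size) satisfies the
# 512-byte alignment that O_DIRECT writes commonly require.
is_aligned() {
    if [ $(( $1 % 512 )) -eq 0 ]; then
        echo aligned
    else
        echo unaligned
    fi
}

is_aligned 4096   # prints "aligned"   (4096 = 8 * 512)
is_aligned 1000   # prints "unaligned" (1000 % 512 = 488)
```

Both the offset and the length of each write must pass this check for an O_DIRECT write to succeed; a single short trailing write is enough to trigger an fsync/write failure like the one reported in this thread.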

Re: [Gluster-devel] How to solve the FSYNC() ERR

2016-07-10 Thread Keiviw
kherjee <amukh...@redhat.com> wrote: Pranith/Krutika, Your inputs please, IIRC we'd need to turn on some O_DIRECT option here? On Saturday 9 July 2016, Keiviw <kei...@163.com> wrote: The errors also occurred in GlusterFS 3.6.7; I just added the O_DIRECT flag in the client protocol open(

[Gluster-devel] How to solve the FSYNC() ERR

2016-07-09 Thread Keiviw
hi, I have installed GlusterFS 3.3.0, and now I get fsync failures when saving files with the O_DIRECT flag in open() and create(). 1, I tried to save a file in vi and got this error: "test" E667: Fsync failed 2, I see this in the logs: [2016-07-07 14:20:10.325400]

[Gluster-devel] GlusterFS linux-AIO bug

2016-07-07 Thread Keiviw
hi, I have some problems with Linux AIO. I have installed GlusterFS 3.6.7 because of its stability, and mounted it at /mnt/glusterfs. When I execute "vi /mnt/glusterfs/test.txt", it works and I can write something successfully. But when I changed the code in

[Gluster-devel] How to enable the FUSE cache in the kernel?

2016-06-29 Thread Keiviw
hi, I have found that the VFS cache (inode and dentry cache) does not work for FUSE filesystems. Instead, the FUSE kernel module provides its own metadata cache (inodes and dentries) for higher performance. In GlusterFS 3.3.0/xlators/mount/fuse/src/fuse-bridge.c, there were some options to enable attribute and
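The attribute and entry timeouts discussed in this thread are exposed as glusterfs FUSE mount options, so they can be set without touching fuse-bridge.c. A hedged example (the server and volume names are placeholders, and the 1-second values are only illustrative):

```shell
# Let the FUSE kernel module cache attributes and dentries for 1 second each.
# attribute-timeout and entry-timeout are glusterfs FUSE mount options;
# server1:/testvol is a placeholder volume.
mount -t glusterfs server1:/testvol \
    -o attribute-timeout=1,entry-timeout=1 /mnt/glusterfs
```

Larger timeouts reduce FUSE lookup/getattr traffic at the cost of the client seeing stale metadata for up to the timeout when another client modifies the volume.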

[Gluster-devel] Glusterfs ls test

2016-06-27 Thread Keiviw
hi, I have tested DHT listing a directory. The environment of this test is: dir1-70/subdir1-subdir96/file1-96, where dirX and subdirX are directories. I executed "time ls dir1" to list subdir1-96 and it took 2.5s, while "time ls subdir1" to list file1-96 took 0.14s. Of

[Gluster-devel] GlusterFS Client doesn't cache the dentry??

2016-06-26 Thread Keiviw
hi, I have installed GlusterFS on nodes A, B, and C. Nodes A and B are GlusterFS servers (brick1-4), and at the same time they are GlusterFS clients used for creating files. Node C has no bricks; it is just a GlusterFS client mounted with "mount -t glusterfs nodeA:/XXX /mnt/glusterfs". Here is

[Gluster-devel] Network.remote-dio

2016-06-18 Thread Keiviw
By default, the GlusterFS server will ignore the O_DIRECT flag. How can the server be made to work in direct-io mode? In GlusterFS 3.5.0 there are two options related to direct I/O, performance.strict-o-direct and network.remote-dio; which one affects the GlusterFS server? And will they exist in GlusterFS
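For reference, a sketch of how these two options are commonly combined with the client-side mount option. Treat this as an assumption to verify: the volume name is a placeholder, and the exact behaviour of these options has varied across GlusterFS releases.

```shell
# Client side: ask the FUSE client not to cache file data.
mount -t glusterfs server1:/testvol -o direct-io-mode=enable /mnt/glusterfs

# Server side: honour O_DIRECT end-to-end instead of filtering it.
# network.remote-dio, when enabled, strips O_DIRECT before it reaches the
# bricks, so it is disabled here; strict-o-direct enforces O_DIRECT semantics.
gluster volume set testvol performance.strict-o-direct on
gluster volume set testvol network.remote-dio disable
```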

[Gluster-devel] DIRECT IO

2016-06-17 Thread Keiviw
1, The GlusterFS server will ignore the O_DIRECT flag by default; how can the server be made to work in direct-io mode? 2, With "mount -t glusterfs XXX:/testvol -o direct-io-mode=enable mountpoint", the GlusterFS client will work in direct-io mode, but the file will still be cached on the hashed server; how to

[Gluster-devel] How to enable direct io??

2016-06-17 Thread Keiviw
With "mount -t glusterfs :/testvol -o direct-io-mode=true mountpoint", the GlusterFS client will enable direct I/O, and the file will not be cached on the GlusterFS client, but this has no effect on the GlusterFS server. By default, GlusterFS will ignore the direct I/O flag. How to make the server

[Gluster-devel] How to enable DIRECT IO?

2016-03-22 Thread Keiviw
hi, In the glusterfs client, execute "mount -t glusterfs volumename -o direct-io-mode=enable mountpoint". Does this mean that the client reads and writes the cluster's files using direct I/O? If not, how to enable direct I/O?

[Gluster-devel] Uninstall GlusterFS

2016-03-09 Thread Keiviw
hi, I have installed GlusterFS 3.7.6 via yum, and now I want to install GlusterFS 3.3.0 for testing. I executed 'yum remove glusterfs glusterfs-server' and then 'gluster --version', and it still showed GlusterFS 3.7.6 built on . How can I uninstall GlusterFS 3.7.6 and install 3.3.0? Release version:
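A plain `yum remove glusterfs glusterfs-server` can leave sibling packages (glusterfs-fuse, glusterfs-libs, glusterfs-cli, and so on) installed, which would explain why `gluster --version` still reports 3.7.6. A sketch of a fuller cleanup; exact package names may vary by distribution:

```shell
# Remove every installed glusterfs package, then confirm nothing remains
# before installing the older release.
yum remove 'glusterfs*'
rpm -qa | grep -i glusterfs   # should print nothing if removal succeeded
```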

[Gluster-devel] GlusterFS readdir-ahead

2016-03-04 Thread Keiviw
hi, Only if a preload (a readdir request not initiated by the application, but triggered by readdir-ahead in an attempt to pre-emptively fill the readdir-ahead buffer) is in progress will the client wait for its completion. If the preload has completed, the client's readdir requests would

[Gluster-devel] Two questions about the performance of ls in glusterfs

2016-02-29 Thread Keiviw
hi, I have two questions about the performance of ls in glusterfs. I know the community has noticed the poor performance of ls and has done work to address it, such as readdir-ahead. Here are my questions. 1, The preload (a readdir request not initiated by application, but instead

Re: [Gluster-devel] Readdir-ahead

2016-02-25 Thread Keiviw
" <rgowd...@redhat.com> wrote: > > >- Original Message - >> From: "Keiviw" <kei...@163.com> >> To: gluster-devel@gluster.org >> Sent: Tuesday, February 23, 2016 6:20:28 AM >> Subject: [Gluster-devel] Readdir-ahead >> >>

[Gluster-devel] Can readdir-ahead be asynchronous?

2016-02-25 Thread Keiviw
As Raghavendra Gowdappa has said, when the preload (a readdir request not initiated by the application, but triggered by readdir-ahead in an attempt to pre-emptively fill the readdir-ahead buffer) is in progress, a readdir request from the application waits for its completion. In the code, when

Re: [Gluster-devel] Readdir-ahead

2016-02-22 Thread Keiviw
, "Raghavendra Gowdappa" <rgowd...@redhat.com> wrote: > > >- Original Message - >> From: "Keiviw" <kei...@163.com> >> To: gluster-devel@gluster.org >> Sent: Tuesday, February 23, 2016 6:20:28 AM >> Subject: [Gluster-devel] Readdir-ahea

[Gluster-devel] readdir-ahead questions

2016-02-22 Thread Keiviw
1, In the code, readdir-ahead doesn't package the readdir request into a bigger request; it just packages up the dentries, and if the dentries' total size exceeds the request size, the larger reply is returned to the client, isn't it? 2, The requests from the readdir-ahead xlator wind down to the next

Re: [Gluster-devel] Readdir-ahead

2016-02-22 Thread Keiviw
" <rgowd...@redhat.com> wrote: > > >- Original Message - >> From: "Keiviw" <kei...@163.com> >> To: gluster-devel@gluster.org >> Sent: Tuesday, February 23, 2016 6:20:28 AM >> Subject: [Gluster-devel] Readdir-ahead >> >>

[Gluster-devel] Readdir-ahead

2016-02-22 Thread Keiviw
I have two questions about the readdir-ahead performance xlator. 1. In the code, the request max_size is 131072 (128K). If I change max_size to a larger value, what will happen? 2. As I said in question 1, with a larger buffer, if a second readdir arrives, the dentry will be returned from the
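On newer releases the preload buffer can be tuned without editing the source. A hedged sketch: the option `performance.rda-request-size` exists in later GlusterFS versions, not necessarily in the release discussed in this thread, and the volume name is a placeholder.

```shell
# Enable readdir-ahead and, where the option is supported, raise the
# preload request size from the default 128K (131072 bytes) to 512K.
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.rda-request-size 524288
```

A larger buffer lets more dentries be served from the preload on subsequent readdir calls, at the cost of more memory per open directory fd.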