If GlusterFS does not support POSIX seekdir, what problems will users or
GlusterFS itself have?
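For concreteness, the POSIX pattern in question is telldir()/seekdir(): an application remembers a directory position and later resumes reading from it. A minimal sketch follows, with /mnt/glusterfs as an assumed mount point; if the filesystem cannot honour the saved offset, the resumed loop may skip or repeat entries:

    /* Minimal telldir()/seekdir() sketch; the path is illustrative. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *d = opendir("/mnt/glusterfs");  /* assumed mount point */
        if (!d)
            return 1;

        readdir(d);                  /* consume one entry */
        long pos = telldir(d);       /* remember where we are */

        rewinddir(d);                /* lose the position... */
        seekdir(d, pos);             /* ...and try to resume it */

        /* If seekdir is not honoured, this may repeat or skip entries. */
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            printf("%s\n", e->d_name);

        closedir(d);
        return 0;
    }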
Sent from NetEase Mail Master
On 11/03/2016 12:52, Raghavendra G wrote:
On Wed, Nov 2, 2016 at 9:38 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:
----- Original Message -----
> From: "Keiviw" <kei...@163.com>
Hi,
In GlusterFS distributed volumes, listing a non-empty directory was slow.
Then I read the DHT code and found the reasons. But I was confused that
GlusterFS DHT traversed all the bricks (in the volume) sequentially; why not use
multiple threads to read dentries from multiple bricks?
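As a generic illustration of the parallel listing the question proposes (this is not GlusterFS code, and the brick paths are invented), one thread per brick directory; note that real DHT would still have to merge, order, and deduplicate the entries afterwards:

    /* Generic thread-per-brick listing sketch, NOT GlusterFS code.
     * Build with: gcc list.c -pthread */
    #include <dirent.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *list_brick(void *arg)
    {
        const char *path = arg;
        DIR *d = opendir(path);
        if (!d)
            return NULL;
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            printf("%s: %s\n", path, e->d_name);  /* output interleaves */
        closedir(d);
        return NULL;
    }

    int main(void)
    {
        char *bricks[] = { "/bricks/b1/dir", "/bricks/b2/dir" };  /* invented */
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, list_brick, bricks[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }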
I did the experiment on a volume with a million files. The
client node's memory usage did grow, as I observed from the output of free(1)
(http://paste.fedoraproject.org/422551/) when I did an `ls`.
-Ravi
On 09/02/2016 07:31 AM, Keiviw wrote:
Exactly, I mounted the volume on a no-brick node (n
edhat.com> wrote:
On 09/01/2016 01:04 PM, Keiviw wrote:
Hi,
I have found that the GlusterFS client (mounted via FUSE) doesn't cache metadata
like dentries and inodes. I have installed GlusterFS 3.6.0 on nodeA and nodeB;
brick1 and brick2 were on nodeA, and then on nodeB I mounted the volume to
/mnt/glusterfs via FUSE. From my test, I executed 'ls
file with O_DIRECT?
Also, what are the offsets and sizes of the writes on this file by this
application in the strace output?
-Krutika
On Mon, Jul 11, 2016 at 2:44 PM, Keiviw <kei...@163.com> wrote:
I have checked the page alignment, i.e. the file was larger than one page; a part
of
wrong size. So I would
like to test out why exactly it is giving this problem. Please note that for an
O_DIRECT write to succeed, both offset and size should be page-aligned;
checking that each is a multiple of 512 is one way to verify it.
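To make the alignment requirement concrete, a minimal sketch (the file path is illustrative): buffer address, write size, and file offset are all multiples of 512, which is what an O_DIRECT write typically requires:

    /* Aligned O_DIRECT write sketch; path is illustrative. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glusterfs/test.bin",
                      O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        size_t size = 4096;                   /* multiple of 512 */
        if (posix_memalign(&buf, 512, size))  /* aligned buffer address */
            return 1;
        memset(buf, 'A', size);

        off_t offset = 4096;                  /* multiple of 512 */
        if (pwrite(fd, buf, size, offset) < 0)
            perror("pwrite");                 /* EINVAL if misaligned */

        free(buf);
        close(fd);
        return 0;
    }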
On Sun, Jul 10, 2016 at 5:19 PM, Keiviw <kei...@163.com> wrote:
Atin Mukherjee <amukh...@redhat.com> wrote:
Pranith/Krutika,
Your inputs please; IIRC we'd need to turn on some O_DIRECT option here?
On Saturday 9 July 2016, Keiviw <kei...@163.com> wrote:
The errors also occurred in GlusterFS 3.6.7; I just added the O_DIRECT flag in the
client protocol open(
hi,
I have installed GlusterFS 3.3.0, and now I get fsync failures when saving
files opened with the O_DIRECT flag via open() and create().
1. I tried to save a file in vi and got this error:
"test" E667: Fsync failed
2. I see this in the logs:
[2016-07-07 14:20:10.325400]
hi,
I have some problems with Linux AIO. I have installed GlusterFS 3.6.7
because of its stability, and mounted it at /mnt/glusterfs. When I executed "vi
/mnt/glusterfs/test.txt", it was OK and I wrote something successfully. But
when I changed the code in
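Since the message is cut off before the code, here is a minimal Linux-AIO (libaio) sketch of the kind of O_DIRECT write being discussed; the file path is illustrative and this is not the poster's actual code. Build with gcc -laio:

    /* Minimal libaio O_DIRECT write sketch; path is illustrative. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glusterfs/test.txt",
                      O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) return 1;

        void *buf;
        if (posix_memalign(&buf, 512, 4096))  /* O_DIRECT needs alignment */
            return 1;
        memset(buf, 'x', 4096);

        io_context_t ctx = 0;
        if (io_setup(8, &ctx) < 0) return 1;

        struct iocb cb, *cbs[1] = { &cb };
        io_prep_pwrite(&cb, fd, buf, 4096, 0);
        if (io_submit(ctx, 1, cbs) != 1) return 1;

        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);   /* wait for completion */

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
    }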
hi,
I have found that the VFS cache (inode and dentry cache) doesn't support
FUSE filesystems. Instead, the FUSE kernel module provides its own metadata
cache (inode, dentry) for higher performance. In GlusterFS
3.3.0/xlators/mount/fuse/src/fuse-bridge.c, there are some options to enable
attribute and
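For reference, the FUSE-side metadata cache is controlled through mount-time timeouts; the host and volume names below are assumptions, and the one-second values are only illustrative (a timeout of 0 disables caching of that item):

    mount -t glusterfs -o attribute-timeout=1,entry-timeout=1 nodeA:/testvol /mnt/glusterfs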
hi,
I have tested DHT directory listing. The environment of this test
is: dir1-70/subdir1-subdir96/file1-96, where dirX and subdirX were directories. I
executed "time ls dir1" to list subdir1-99; it took 2.5s. Meanwhile, I executed
"time ls subdir1" to list file1-99; it took 0.14s. Of
hi,
I have installed GlusterFS on nodes A, B, and C. Nodes A and B were GlusterFS
servers (brick1-4); meanwhile, they were also GlusterFS clients for creating files.
Node C had no bricks; it was just a GlusterFS client, mounted with
"mount -t glusterfs nodeA:/XXX /mnt/glusterfs".
Here is
By default, the GlusterFS server will ignore the O_DIRECT flag. How can the
server be made to work in direct-io mode? In GlusterFS 3.5.0 there are two options
for direct I/O, performance.strict-o-direct and network.remote-dio; which one
affects the GlusterFS server? And will they exist in GlusterFS
1. The GlusterFS server will ignore the O_DIRECT flag by default; how can the
server be made to work in direct-io mode?
2. With "mount -t glusterfs XXX:/testvol -o direct-io-mode=enable mountpoint", the
GlusterFS client will work in direct-io mode, but the file will be cached on the
hashed server. How to
By "mount -t glusterfs :/testvol -o direct-io-mode=true mountpoint",the
GlusterFS client will enable the direct io, and the file will not cached in the
GlusterFS client,but it won't work in the GlusterFS server. By defalut,the
GlusterFS will ignore the direct io flag. How to make the server
hi,
In the GlusterFS client, execute "mount -t glusterfs volumename -o
direct-io-mode=enable mountpoint".
Does this mean that the client reads and writes the cluster's files with
direct I/O? If not, how can direct I/O be enabled?
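A hedged sketch combining the options discussed in these messages; the host nodeA and volume testvol are assumptions, and the exact semantics should be checked against your release (network.remote-dio filters O_DIRECT at the client protocol, performance.strict-o-direct makes the client-side write-behind honour it):

    mount -t glusterfs -o direct-io-mode=enable nodeA:/testvol /mnt/glusterfs
    gluster volume set testvol network.remote-dio disable
    gluster volume set testvol performance.strict-o-direct on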
hi,
I have installed GlusterFS 3.7.6 via yum and now I want to install GlusterFS
3.3.0 for testing. I executed 'yum remove glusterfs glusterfs-server', then
'gluster --version'; it still showed GlusterFS 3.7.6 built on .
How can I uninstall GlusterFS 3.7.6 and install 3.3.0?
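One likely explanation, offered as a guess: 'yum remove glusterfs glusterfs-server' leaves sibling packages such as glusterfs-libs, glusterfs-cli, and glusterfs-fuse installed, and one of those still provides the gluster binary. A hedged cleanup sketch:

    rpm -qa | grep -i gluster    # list every installed gluster package
    yum remove "glusterfs*"      # remove them all before installing 3.3.0
    hash -r                      # drop the shell's cached path to the old binary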
Release version:
hi,
Only if a preload (a readdir request not initiated by the application, but
triggered by readdir-ahead in an attempt to pre-emptively fill the
readdir-ahead buffer) is in progress will the client wait for its completion. If
the preload has completed, and the client's readdir requests would
hi,
I have two questions about the performance of ls in GlusterFS, and I know that
the community has noticed the poor performance of ls and has done something to
address it, like readdir-ahead. Here are my questions.
1. The preload (a readdir request not initiated by the application, but instead
"Raghavendra Gowdappa" <rgowd...@redhat.com> wrote:
> ----- Original Message -----
> From: "Keiviw" <kei...@163.com>
> To: gluster-devel@gluster.org
> Sent: Tuesday, February 23, 2016 6:20:28 AM
> Subject: [Gluster-devel] Readdir-ahead
As Raghavendra Gowdappa has said, when the preload (a readdir request not
initiated by the application, but triggered by readdir-ahead in an attempt
to pre-emptively fill the read-ahead buffer) is in progress, a readdir request
from the application waits for its completion. In the code, when
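A deliberately simplified, hypothetical sketch of that serialization; this is not the actual readdir-ahead source, and the struct fields and helper comment are invented for illustration:

    /* Hypothetical sketch, NOT GlusterFS code: an application readdir
     * blocks while a preload is still filling the buffer. */
    #include <pthread.h>
    #include <stdbool.h>

    struct rda_priv {                     /* invented type */
        pthread_mutex_t lock;
        pthread_cond_t  preload_done;
        bool            preload_in_progress;
    };

    static void serve_readdir(struct rda_priv *priv)
    {
        pthread_mutex_lock(&priv->lock);
        while (priv->preload_in_progress)          /* preload running? */
            pthread_cond_wait(&priv->preload_done, &priv->lock);
        /* ...serve the application's readdir from the buffer... */
        pthread_mutex_unlock(&priv->lock);
    }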
, "Raghavendra Gowdappa" <rgowd...@redhat.com> wrote:
>
>
>- Original Message -
>> From: "Keiviw" <kei...@163.com>
>> To: gluster-devel@gluster.org
>> Sent: Tuesday, February 23, 2016 6:20:28 AM
>> Subject: [Gluster-devel] Readdir-ahea
1. In the code, readdir-ahead didn't package the readdir request up into a bigger
request; it just packaged up the dentries. If the dentries' size was greater
than the request size, the bigger reply was returned to the client, wasn't it?
2. The requests from the readdir-ahead xlator wind down to the next
I have two questions about readdir-ahead performance.
1. In the code, the request max_size is 131072 (128K). If I change max_size to
a larger value, what will happen?
2. As I said in question 1, with a larger buffer, if a second
readdir arrives, the dentry will be returned from the
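For what it's worth, later GlusterFS releases expose this buffer through volume options; the option names below come from those later releases, not from the code discussed in this thread, so treat them as assumptions to verify:

    gluster volume set testvol performance.rda-request-size 262144
    gluster volume set testvol performance.rda-cache-limit 20MB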