Hi Jon:
    GlusterFS is not designed to handle a very large number of small files.
Because it has no metadata server, every lookup has to be resolved on the
bricks themselves, so lookups are expensive in your situation.
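    If restructuring the data is not an option, client-side caching can soften
the lookup cost somewhat. A rough sketch, assuming a volume named "myvol"
(the volume name, server name, mount point, and timeout values below are only
examples, not your configuration):

    # cache stat/attribute results on the client side (md-cache)
    gluster volume set myvol performance.stat-prefetch on

    # raise the FUSE attribute/entry cache timeouts at mount time
    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 \
        server1:/myvol /mnt/myvol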
    The disk usage does look abnormal. Do those disks hold only Gluster
bricks, or is anything else stored on them?
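    To rule that out, you could compare what the filesystem reports with what
the brick directories actually hold. For example, assuming the bricks live
under /export (the path is only an illustration):

    # filesystem-level usage vs. what the brick directories contain
    df -h /export
    du -sh /export/brick*

If du and df disagree wildly, something other than the bricks is consuming
the space.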

Best Regards,
Jules Wang




At 2012-11-02 08:03:21, "Jonathan Lefman" <[email protected]> wrote:
Hi all,


I am having problems with painfully slow directory listings on a freshly
created replicated volume.  The configuration is as follows: 2 nodes with 3
replicated drives each.  The total volume capacity is 5.6T.  We would like to
expand the storage capacity much more, but first we need to figure this
problem out.
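
For reference, the volume was created along these lines (the hostnames and
brick paths below are placeholders, not our exact layout):

    gluster volume create myvol replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node1:/export/brick2 node2:/export/brick2 \
        node1:/export/brick3 node2:/export/brick3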


Soon after loading about 100 MB of small files (about 300 KB each), the drive
usage is at 1.1T.  I am not sure if this is to be expected.  The main problem
is that directory listing (ls or find) takes a very long time.  The CPU usage
on the nodes is high for each of the glusterfsd processes; there are 3 on each
machine, and 54%, 43%, and 25% per core is a representative sample of their
usage.  Memory usage is very low for each process.  It is incredibly difficult
to diagnose this issue.  We have wiped previous gluster installs, all
directories, and mount points, as well as reformatting the disks.  Each drive
is formatted with ext4.
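
In case it helps to narrow this down, here is roughly how we have been
observing the problem (the volume name and paths are placeholders):

    # per-brick latency breakdown of operations such as LOOKUP and READDIR
    gluster volume profile myvol start
    ls -R /mnt/myvol > /dev/null
    gluster volume profile myvol info

    # syscall-level timing of a single directory listing
    strace -c ls /mnt/myvol/somedir > /dev/null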


Has anyone had a similar result?  Any ideas on how to debug this one?


Thank you,


Jon

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
