On 08/11/2011 10:36 AM, Jean-Francois Chevrette wrote:
Hello everyone,

I have just begun playing with GlusterFS 3.2 on a Debian Squeeze
system. This system is a powerful quad-core Xeon with 12GB of RAM and
two 300GB SAS 15k drives configured as a RAID-1 on an Adaptec 5405
controller. Both servers are connected through a crossover cable on
gigabit ethernet ports.

I installed the latest GlusterFS 3.2.2 release from the provided
debian package.

As an initial test, I've created a simple brick on my first node:

gluster volume create brick transport tcp node1.internal:/brick

I started the volume and mounted it locally

mount -t glusterfs 127.0.0.1:/brick /mnt/brick
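(As a quick sanity check before benchmarking, the standard gluster CLI can confirm the volume actually came up; this is a sketch assuming the volume name "brick" from the create command above.)

```shell
# Show the volume's type, status, and brick list
gluster volume info brick

# Confirm the client mount is live
df -h /mnt/brick
```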

I ran an iozone test on both the underlying partition and the
glusterfs mountpoint. Here are my results for the random write test
(results are in ops/sec):

[...]

(sorry if the formatting is messed up)


Any ideas why I am getting such bad results? My volume is not even
replicated or distributed yet!

You are not getting "bad" results. The results from the local fs without gluster are likely completely cached. This is a very small test, and chances are your IOs aren't even making it out to the device before the test completes.
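One way to take the page cache out of the picture is to re-run iozone with direct I/O. A sketch of such an invocation (the sizes here are illustrative, not a recommendation for your hardware):

```shell
# -I  use O_DIRECT so I/O bypasses the page cache
# -i 0 / -i 2  run the write/rewrite and random read/write tests
# -r 4k -s 1g  4KB records against a 1GB file (illustrative sizes)
iozone -I -i 0 -i 2 -r 4k -s 1g -f /mnt/brick/iozone.tmp
```

A file size comfortably larger than RAM (here, >12GB) is the other common way to defeat caching if -I is not available.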

The only test in your results that is likely generating any sort of realistic IO is the very last row and last-column data size.

A 15k RPM disk will do ~300 IOPS, which is about what you should see per unit. For a RAID-1 across two such disks, you should get (depending upon how you built the RAID-1 and what the underlying RAID system is) from 150-600 IOPS in most cases.
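The ~300 IOPS figure falls out of simple disk mechanics: each random IO costs roughly one seek plus, on average, half a rotation. A back-of-envelope sketch (the seek time below is an illustrative assumption, not a measured figure for these SAS drives):

```python
# Estimate random IOPS for a 15k RPM disk from its mechanical latencies.

RPM = 15000
# Average rotational latency = half a revolution, in seconds (2 ms at 15k RPM)
avg_rotational_latency = 0.5 * 60.0 / RPM
# Average seek time for short-stroked random IO (assumed, ~1.5 ms)
avg_seek_time = 0.0015

service_time = avg_rotational_latency + avg_seek_time
iops = 1.0 / service_time
print(f"~{iops:.0f} IOPS per drive")
```

With those assumptions the estimate lands near 300 IOPS per spindle, which is why the cached local-fs numbers (tens of thousands of ops/sec) cannot be real disk IO.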


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: [email protected]
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
