On Wed, 27 Jun 2012, Brian Candler wrote:

For a 16-disk array, your IOPS is not bad.  But are you actually storing a
VM image on it, and then doing lots of I/O within that VM (as opposed to
mounting the volume from within the VM)?  If so, can you specify your exact
configuration, including OS and kernel versions?

2.6.32-220.23.1.el6.x86_64

[root@virt01 ~]# gluster volume info share

Volume Name: share
Type: Distributed-Replicate
Volume ID: 09bfc0c3-e3d4-441b-af6f-acd263884920
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.59.0.11:/export
Brick2: 10.59.0.12:/export
Brick3: 10.59.0.13:/export
Brick4: 10.59.0.14:/export
Brick5: 10.59.0.15:/export
Brick6: 10.59.0.16:/export
Brick7: 10.59.0.17:/export
Brick8: 10.59.0.18:/export
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.nlm: off
auth.allow: *
nfs.disable: off
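For reference, a 4 x 2 distributed-replicate layout like this is normally built by listing the bricks in replica-set order, so consecutive pairs become mirrors. A sketch of the create command (the brick paths and replica count are taken from the volume info above; the exact invocation used here is an assumption):

```shell
# Sketch: recreate an 8-brick, 4 x 2 distributed-replicate volume.
# With "replica 2", consecutive brick pairs (.11/.12, .13/.14, ...)
# form the replica sets.
gluster volume create share replica 2 transport tcp \
    10.59.0.11:/export 10.59.0.12:/export \
    10.59.0.13:/export 10.59.0.14:/export \
    10.59.0.15:/export 10.59.0.16:/export \
    10.59.0.17:/export 10.59.0.18:/export
gluster volume start share
```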

I did my tests on two quad-core/8GB nodes, 12 disks in each (md RAID10),
running Ubuntu 12.04, with a 10GbE RJ45 direct connection.  The disk arrays
locally perform at 350MB/s for streaming writes.

Well, I would first ditch Ubuntu and install CentOS, but..... My disk arrays are slow:

[root@virt01 ~]# dd if=/dev/zero of=foo bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 26.8408 s, 200 MB/s
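One caveat: a plain dd from /dev/zero largely measures the host page cache, not the disks. Adding conv=fdatasync makes dd flush the data before reporting, which gives a more honest streaming-write figure. A small sketch (the output path is arbitrary):

```shell
# Write 64 MiB, forcing an fdatasync before dd reports throughput,
# so the host page cache doesn't inflate the number.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```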

But doing a dd if=/dev/zero bs=1024k within a VM, whose image was mounted on
glusterfs, I was getting only 6-25 MB/s.

[root@test ~]# dd if=/dev/zero of=foo bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 172.706 s, 31.1 MB/s
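Guest write throughput on a GlusterFS-backed image also depends heavily on the hypervisor's disk cache mode; cache=none (O_DIRECT on the host) is the usual recommendation for images stored on network filesystems, since it avoids double-caching. A sketch of the relevant part of a qemu-kvm invocation (the image path and memory size are placeholders, not taken from this setup):

```shell
# Sketch: cache=none makes guest writes bypass the host page cache
# (O_DIRECT) and go straight to the GlusterFS mount.
qemu-kvm -m 2048 \
    -drive file=/mnt/share/test.img,if=virtio,cache=none
```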

While this is slower than I would like to see, it's faster than what I was getting from my NetApp, and it scales better! :)

Nathan Stratton
nathan at robotics.net
http://www.robotics.net
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
