On Thu, 28 Jun 2012, Brian Candler wrote:

On Wed, Jun 27, 2012 at 05:28:43PM -0500, Nathan Stratton wrote:
[root@virt01 ~]# dd if=/dev/zero of=foo bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 26.8408 s, 200 MB/s

But doing a dd if=/dev/zero bs=1024k within a VM, whose image was mounted on
glusterfs, I was getting only 6-25MB/s.

[root@test ~]# dd if=/dev/zero of=foo bs=1M count=5k
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 172.706 s, 31.1 MB/s

That's what I consider unimpressive - slower than a single disk, when you
have an array of 16. I should try a pair of DRBD nodes as a fair comparison
though.
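One caveat on the dd numbers themselves (an editorial aside, not from the thread): without conv=fdatasync, dd can report page-cache speed rather than sustained disk or Gluster throughput. A minimal sketch, using a smaller count and a hypothetical /tmp path rather than the thread's count=5k:

```shell
# conv=fdatasync forces a flush to stable storage before dd prints its
# rate, so the figure reflects disk (or Gluster) throughput rather than
# the page cache. Path and count here are illustrative, not from the thread.
dd if=/dev/zero of=/tmp/foo bs=1M count=64 conv=fdatasync
rm -f /tmp/foo
```

Running both variants back to back on the same mount is a quick way to see how much of a reported number is cache.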

But wait - yes, I have 16 physical disks, but I am running distribute + replicate, so the 8 physical boxes are broken up into 4 redundant pairs. When I do a write, I am writing to two servers, i.e. 4 physical disks. So in my case, 31.1 MB/s vs. about 200 MB/s native is not that bad.
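The layout Nathan describes (replica 2 distributed across 8 servers) would be created along these lines with the Gluster CLI of that era; the hostnames and brick paths below are assumptions for illustration, not taken from the thread:

```shell
# Bricks are paired in the order listed: (virt01,virt02), (virt03,virt04),
# etc. form the 4 replica pairs; files are then distributed across pairs.
gluster volume create vmimages replica 2 transport tcp \
  virt01:/export/brick virt02:/export/brick \
  virt03:/export/brick virt04:/export/brick \
  virt05:/export/brick virt06:/export/brick \
  virt07:/export/brick virt08:/export/brick
gluster volume start vmimages
```

Brick ordering matters: putting two bricks from the same server adjacent in the list would place both replicas on one machine and defeat the redundancy.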

DRBD is MUCH faster, but you're not comparing apples to apples. DRBD has worked great for me in the past when I only needed two storage nodes mirrored in active/active, but as soon as you grow past that you need to look at something like Gluster. With GlusterFS my single write is slower at 31.1 MB/s, but I can run many such writes concurrently across my 8 nodes without losing I/O.

Having said that, multiple clients running concurrently should be able to
use the remaining bandwidth, so the aggregate throughput should be fine.

Correct.
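The aggregate claim can be sketched with back-of-envelope arithmetic (the per-stream number is from this thread; the rest is an idealized assumption that concurrent writers land on different replica pairs):

```shell
# Back-of-envelope aggregate throughput for replica 2 over 8 servers.
# Real results depend on network and disk contention.
servers=8        # storage nodes in the cluster
replica=2        # each write lands on one replica pair
per_stream=31    # MB/s observed for a single client write in the thread
pairs=$(( servers / replica ))
echo "independent replica pairs: $pairs"                              # 4
echo "ideal aggregate, one writer per pair: $(( pairs * per_stream )) MB/s"  # 124 MB/s
```

In the ideal case that is roughly 124 MB/s total across four writers - well above any single stream, which is the point both posters agree on.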

Nathan Stratton
nathan at robotics.net
http://www.robotics.net
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
