On Mon, 2 Mar 2009, Paul Hieromnimon wrote:

I am considering using Gluster to build a Xen hosting cluster/cloud. Xen
requires shared storage in order to do a "live migration" (move a virtual
machine from one host to another without taking it down) so that the VM's
disk image is available on the machine it gets moved to.  The typical
deployment scheme for this is a SAN, but a SAN is expensive both in terms of
hardware and rack space - a compute node can hold 4TB worth of drives, so
why not use that instead of getting a big SAN box?
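
(For context, the live migration being described is what Xen's xm tool does;
the domU and host names below are placeholders:)

  # Move running domU "vm1" to node2 without shutting it down -
  # both hosts must be able to see vm1's disk image.
  xm migrate --live vm1 node2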

Not just expensive - it does not scale! When you build clusters you want I/O to scale with the servers, and you can't do that with a SAN. Our boxes are:

Hardware (6 nodes)
--------
Supermicro X7DWN+
Dual Xeon 5410 CPUs
32 GIG RAM
3ware 9650SE-ML16 (RAID6)
16 x 750GB drives with 32MB cache
MT25204 20Gb/s InfiniBand

Software
--------
CentOS 5.2
Xen 3.3.0
Gluster (latest git)

I'd like to be able to use Gluster in such a way that:

1. The deployment is "symmetrical" - every machine is both a server and a
client.

yep!
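
In practice that means every node runs both the server (glusterfsd) and a
client mount. A minimal sketch, assuming the volfiles live in /etc/glusterfs
and the mount point is /mnt/gluster (both just illustrative):

  # On every node: start the brick server...
  glusterfsd -f /etc/glusterfs/glusterfsd.vol

  # ...then mount the whole cluster back on the same box.
  glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/gluster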

2. I can add another machine to the pool and add its storage space to the
pool without having to take down the whole cluster.

Gluster is working on that.

3. Use Gluster for redundancy instead of RAID.  It would be nice if I can
lose any single hard drive and/or entire server and still have access to
100% of all the data in the pool.  In this sort of setup, is it possible to
limit the number of copies of data to 2 or 3, or if I have 10 machines, will
I be forced to have 10 copies of the data?

Sure, you can do almost anything you want. Today there is no RAID-like functionality, but that is on the roadmap. We use "replicate" on pairs of servers and then unify the pairs with "distribute".

An example of our 4 node test cluster is:

http://share.robotics.net/glusterfs.vol
http://share.robotics.net/glusterfsd.vol
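
In case those links go stale, here is a rough sketch of that kind of client
volfile (hostnames and volume names are made up for illustration, and only
two of the four bricks are written out):

  volume node1
    type protocol/client
    option transport-type ib-verbs
    option remote-host node1
    option remote-subvolume brick
  end-volume

  volume node2
    type protocol/client
    option transport-type ib-verbs
    option remote-host node2
    option remote-subvolume brick
  end-volume

  # node3 and node4 are declared the same way

  volume pair1
    type cluster/replicate
    subvolumes node1 node2
  end-volume

  volume pair2
    type cluster/replicate
    subvolumes node3 node4
  end-volume

  volume dist
    type cluster/distribute
    subvolumes pair1 pair2
  end-volume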

4. Get good performance.  Will I get acceptable performance through gigabit
ethernet, or do I need 10 gigabit ethernet or InfiniBand to have something
decent?  Because I want a configuration where each machine is both a client
and a server, will performance degrade as I add more machines such that the
network needs to handle n^2 connections, where n is the number of servers?
Or will performance improve because data will be striped across a lot of
machines?

InfiniBand is totally worth it; the hardware is low cost (you can even pick it up on eBay, like we did) and has much lower latency than Ethernet.

Now, as far as "good performance" goes, this is where I am having the most issues with Gluster. To make it work with Xen you need --disable-direct-io-mode when you start up glusterfs. I am not saying this is the best way to test, but running "dd if=/dev/zero of=test bs=1G count=8" we get:

XFS partition on 3ware          378 MB/s Not bad for writes!
Gluster default                 110 MB/s Expected more...
Gluster disable-direct-io-mode   22 MB/s OUCH!!!
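
For reference, the mount that produced that last number looks something like
this (paths are illustrative):

  # Direct I/O must be off for Xen's file-backed disks to work at all,
  # but the numbers above show what it costs in throughput.
  glusterfs --disable-direct-io-mode -f /etc/glusterfs/glusterfs.vol /mnt/gluster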

The other issue we have is that so far we have only been able to use Xen with file: and not tap:aio (it starts, but never finishes booting the domU).
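
In domU config terms (the image path is made up), this is what works for us:

  disk = [ 'file:/mnt/gluster/vm1.img,xvda,w' ]

while this starts but never finishes the boot:

  disk = [ 'tap:aio:/mnt/gluster/vm1.img,xvda,w' ]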

5. How is OpenSolaris support?  I'd like to be able to use ZFS as the
underlying filesystem, although I'd be happy with Linux or NetBSD too.

No clue; we use CentOS 5.2.

6. I spoke to Hitesh and he said that files have an affinity for the machine
that's accessing them.  Suppose I live-migrate a Xen VM to another machine -
will its disk image eventually physically make its way over to that
machine?

This is an option, but we just let distribute throw them where it wants.

I would like to thank in advance anyone who answers these questions for me.
I am new to Gluster and distributed filesystems so I apologize if any of my
questions are stupid.

Did not see any. :)


Nathan Stratton                                CTO, BlinkMind, Inc.
nathan at robotics.net                         nathan at blinkmind.com
http://www.robotics.net                        http://www.blinkmind.com

_______________________________________________
Gluster-users mailing list
[email protected]
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
