If your main goal is more convenient access to your data, then the
storage array is what you should concern yourself with.  The front-end
servers don't matter much as long as they're up.

Typically I'd get two front-end servers and put them in a cluster, with
two separate external disk arrays behind them, each array striped and the
two stripes mirrored against each other.  That's pretty bulletproof...
you've got your failover and your redundancy.
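On Linux, that layout (two striped arrays mirrored against each other,
i.e. RAID 0+1) can be sketched with mdadm; the /dev/sd* device names
below are placeholders for whatever disks the two external arrays
actually present to the host.

```shell
# Stripe (RAID 0) across the disks of each external array.
# Device names are hypothetical -- substitute your real disks.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=0 --raid-devices=4 \
    /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Mirror (RAID 1) the two stripes against each other, so either
# whole array can die without losing data.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
```

Then you'd just make a filesystem on /dev/md2 and serve from that.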

Most vendors sell JBODs, and I don't think there's much difference
between them.  You'd have to pick some kind of software RAID to make
these work.  Vendors also sell smart arrays with on-board RAID
controllers, but those all work differently, so I'd stick with software
RAID.

I'm really only familiar with the commercial stuff like Veritas Cluster
and Veritas Volume Manager, but I'm sure there's an open-source way to
do it.
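One open-source route on Linux would be mdadm for the volume management
side (a rough analogue of Veritas Volume Manager) plus something like
Linux-HA/Heartbeat for the failover cluster.  As a small monitoring
sketch, mdadm can watch the arrays and mail you on failures (the address
and /dev/md2 device are placeholders):

```shell
# Run mdadm as a daemon, scanning all md arrays and mailing on
# failure/degraded events.
mdadm --monitor --scan --mail=admin@example.com --daemonise

# Quick manual health checks.
cat /proc/mdstat
mdadm --detail /dev/md2
```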

-e

On Mon, 15 Aug 2005, John Hunter wrote:

>
> We have a lot of data on CDs, maybe 10TB or so.  We want to build a
> server to dump all of this data onto so the data can be accessed more
> conveniently over http.  I'm thinking a bunch of rackspace linux
> commodity boxes with lots of RAID hotswap hard drives.  The data files
> are 100MB-300B, and the server's job will mainly be to serve up these
> largish files.  This is a research machine, so there will typically be
> at most a few clients accessing data at a time.
>
> Any particular vendors, architectures, products, etc I should be aware
> of?  This is my first venture into rackspace, so I'm pretty much a
> newbie.
>
>
> JDH
> _______________________________________________
> Bits mailing list
> [email protected]
> http://www.sugoi.org/mailman/listinfo/bits
>