Run Solaris x86! The site I used to work at has been using it
exclusively on its AFS servers since Solaris 10 came out.
-rob
On Nov 30, 2007, at 13:51, Stephen Joyce <[EMAIL PROTECTED]>
wrote:
I don't have money for FC or a SAN, so I've stuck with DAS. I've had
good experience with building many smallish servers rather than one
big or expensive one.
I'm currently using cheap Dell PowerEdge servers running Linux. I
think we got them for about $800/ea, and they support console
redirection (critical when you have lots of physical servers; see the
sketch below). We added a 2-port 3ware card (RAID 1) for the OS and a
4-port 3ware card for the data (RAID 1 or RAID 5, depending on the
requirements). Right now I'm keeping the servers to around 1TB each,
but they're capable of hosting 2-4TB each (depending on RAID level)
with the largest current drives.
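On the console-redirection point, here's a minimal sketch of the OS
side of it, assuming GRUB legacy, the first serial port (ttyS0) at
115200 baud, and a BIOS already set to redirect to that port -- the
exact details vary by Dell model and distro:

  # /boot/grub/menu.lst -- let GRUB and the kernel use both VGA and serial
  serial --unit=0 --speed=115200
  terminal --timeout=5 serial console
  kernel /vmlinuz ro root=/dev/sda2 console=tty0 console=ttyS0,115200n8

  # /etc/inittab -- also spawn a login prompt on the serial line
  co:2345:respawn:/sbin/agetty -L 115200 ttyS0 vt100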
If money were no object, I'd have opted for hot-swappable drives,
but with under 1TB of data on each, any time I've needed to replace
a drive I've just moved the volumes to another server.
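In case the volume-evacuation trick isn't familiar, a sketch of what
that looks like (server, partition, and volume names here are just
examples):

  # move one volume off the box with the failing disk
  vos move home.jdoe afs1 /vicepa afs2 /vicepa -localauth

  # confirm nothing is left on the source partition
  vos listvol afs1 /vicepa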
These systems are cheap enough (under about $1.5K each for
everything) that I keep a spare of everything just in case: a fully
configured spare server kept running, plus spare RAID cards and
drives on the shelf.
I _strongly_ advise RAID: RAID 1 for the OS and RAID 1, 5, or 6 for
the data, depending on your requirements. I know some people have
reported impressive results with Linux software RAID, but I swear by
3ware hardware RAID controllers; they "just work." Just avoid
"fakeraid" controller cards (Promise, low-end Adaptec, etc.) like the
plague; they're far more trouble than they're worth.
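For what it's worth, the 3ware cards are also easy to monitor from
the command line; a sketch assuming the 9xxx-series tw_cli tool and
controller 0 (adjust the numbering for your card):

  # show units and drives on the first controller; anything not OK needs attention
  tw_cli /c0 show

  # cron-friendly check that mails root when a unit is degraded
  tw_cli /c0 show | grep '^u' | grep -v OK && \
    echo "check RAID on $(hostname)" | mail -s '3ware alert' root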
I really like Solaris, but this setup is MUCH cheaper and faster
than our old Solaris setup.
On Fri, 30 Nov 2007, Jason Edgecombe wrote:
Hi everyone,
Traditionally, we have used direct-attached SCSI disk packs on Sun
SPARC servers running Solaris 9 for OpenAFS. This has given us the
most bang for the buck. We forgo RAID because we have the backup
capabilities of AFS.
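For anyone reading along, the "backup capabilities of AFS" usually
means per-volume .backup clones plus vos dump; a sketch of that
approach, with the volume name, prefix, and dump path made up (your
site's actual workflow may differ):

  # clone every volume whose name starts with "user" into its .backup snapshot
  vos backupsys -prefix user -localauth

  # full dump of one backup clone to a file for the tape/backup system
  vos dump user.jdoe.backup -time 0 -file /backups/user.jdoe.dump -localauth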
What types of storage technologies are other AFS sites using for
their AFS vicep partitions? We need to figure out our future
direction for the next couple of years. Fibre Channel seems all the
rage, but it's quite expensive. I'm open to any and all feedback.
What works? What doesn't? What offers the best bang for the buck on
an OpenAFS server?
This is for an academic environment that fills both academic and
research needs. Researchers are asking for lots of AFS space
(200GB+). Of course this needs to be backed up as well.
Thanks,
Jason
Cheers, Stephen
--
Stephen Joyce
Systems Administrator
Physics & Astronomy Department
University of North Carolina at Chapel Hill
voice: (919) 962-7214
fax: (919) 962-0480
PANIC: Physics & Astronomy Network Infrastructure and Computing
http://www.panic.unc.edu
Don't judge a book by its movie.
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info