Maxim Khitrov wrote:

> Greetings,
> I'm planning to build a new home file server for myself, starting with
> about 2TB of RAID6 space, but with room to grow in the future. Most of
> that will be on SATA drives, but I may throw in two SAS drives in
> RAID1 for the base OS, hence the SAS raid controller and enclosure.
> The highest priority for this build is data security, followed by
> performance and uptime.
> Rather than go for server-grade components, I thought that I should
> instead try to separate storage from the server itself. It's cheaper
> (sort of), easier to upgrade in the future, and if the server goes
> down for some reason, I can just put the raid card into another
> machine and once again have access to my data. The other advantage
> with this build is that I already have a Q6600 and some DDR2 memory
> around, so that will save me money on having to get Xeons and ECC
> memory. With that in mind, I currently have the following components
> picked out (listed below).
> I would like to know whether anyone has used any of these with FreeBSD
> 7.x, or if you have some other suggestions for what I should look into
> (am I asking for trouble by using these parts for a 24/7 file server
> in terms of stability)? I know that the 3ware controller should be
> supported, but I'm not sure about the Shuttle. How does FreeBSD play
> with X48 chipset? The drive enclosure obviously doesn't interact with
> the OS, but I'd still like your opinion on it or maybe some
> alternatives. Please let me know what you think.

I'm not really answering the direct question, per se, but there is a data 
point you may wish to know a little more about. There is a difference between 
"Enterprise" and "Desktop" hard drives: the length of the timeout the drive 
allows itself when it hits an error condition, such as a platter sector 
read/write error and the resulting remap.

Desktop drives will keep retrying for a fairly long period (something like 8 
seconds, or more) while trying to handle the situation. On an "Enterprise" 
grade drive this period is much shorter, something like 1 to 1.5 seconds at most. 
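On drives that support it, this recovery timeout is exposed through the ATA SCT Error Recovery Control feature (Western Digital markets it as TLER), and smartmontools can query and change it. A hedged sketch follows; the device name /dev/ada0 is an assumption, and many desktop drives either refuse the command or forget the setting across power cycles:

```shell
# Query the drive's current SCT Error Recovery Control (ERC)
# read/write timeouts. Unsupported drives will report an error.
smartctl -l scterc /dev/ada0

# Set both the read and write recovery timeouts to 7.0 seconds.
# The values are in deciseconds, so 70 = 7.0 s.
smartctl -l scterc,70,70 /dev/ada0
```

If the second command sticks, a desktop drive behaves much more like an enterprise one for RAID purposes, though the setting typically has to be reapplied at every boot.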

Different hardware combinations ultimately behave differently, but the place 
where this matters most is with a RAID controller. A RAID controller expects 
this timeout to be very short. When paired with desktop drives, a RAID 
controller will sometimes detach, or lose its connection to, a drive, and 
you may see lots of READ_DMA and/or WRITE_DMA errors. 

This is very problematic because it may not actually show itself until quite a 
while after the drive(s) have been placed into service: everything will run 
just fine until the first time a sector fails and the drive remaps it to 
another location. A "Desktop" series drive can take so long to handle this 
error condition that the controller assumes the entire drive is no longer 
present.

In a datacenter environment the "Enterprise" grade of drives is commonly 
used. It is when a home user attaches desktop drives to a RAID controller 
that this problem is most likely to surface. It doesn't happen in all 
situations; many people have done just this and experienced no trouble at 
all. Just one small data point to consider.

