Hi Mike,

To agree with and add to what James McP. said, and to share some experiences:

Your proposed hardware config is pretty much my home server, although I used 
500GB 7200rpm drives; you may want larger ones, since 1TB drives now cost about 
the same now that 1.5TB drives are available. From a cooling perspective I got 
a $30 Cooler Master case with a filtered mesh front from Fry's. I blocked all 
the vents except the front ones in front of the drives and a single 120mm fan 
at the back. The drives are spaced with gaps above and below and mounted on 
rubber grommets via the drive cage extenders I used. The "wind tunnel" effect 
this creates moves a good volume of air at a slow speed past the drives. The 
large fan never gets cranked past 25%, so between that, the filter foam 
deadening the front, and the slow airflow, it's pretty quiet. It sits on a 
desk in my home office, and the office ceiling fan makes more noise.

The rest of the hardware is whatever was on sale at Fry's that week: a 
workstation mobo with 4 SATA ports, a Core Duo CPU, and a $30 SiL-based "RAID" 
PCI-e 1x card for another 2 SATA ports. I started off headless with the 
onboard video, but added an off-brand PCI-e NVIDIA card later because I wanted 
to mess around with Sun's Shared Visualization software. When I moved the SATA 
card (and accidentally swapped the cables), it changed the physical addresses 
of 2 of the HDDs. ZFS didn't even burp on reboot. (Take that, LVM and md! :-) 
With the default install the drives are spun down when not in use, reducing 
thermal load and power consumption. I got a 600W PSU because I wanted to run 
it at low load for low heat and noise, but not worry about surges when 
spinning all the drives up. The NIC is a $10 PCI-e Realtek Gigabit. The CPU 
uses the Intel stock retail cooler and rarely gets above 40degC. I can run 
with the CPU fan disconnected and the "wind tunnel" cooling the CPU passively, 
but the BIOS bitches and I can't seem to turn off the error message stream.

If I had to do it over, the only thing I'd do differently is get a mobo with 
6 (or more) SATA ports and a chipset that lets me use more than 3.75GB of my 
two 2GB DIMMs. That'll wait for the next bargain deal at Fry's/Egghead/wherever 
and I'll probably be installing OSol 2008.next by then. Since OSol wasn't out 
at the time, I'm running Solaris Express DE (with a manual ZFS root) and 
there's no quick upgrade to OSol. So I'll probably wait for my next major 
hardware change and do a reinstall of the root pool. Exporting and importing 
my data zpool, OTOH, will be trivial.
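
For reference, that last step really is just two commands; a minimal sketch, 
assuming a data pool named "tank" (substitute your own pool name):

zpool export tank    # on the old install: cleanly detach the pool
zpool import tank    # on the new install: find it and bring it back in

If the new install doesn't see the pool by name, a bare "zpool import" lists 
any importable pools it finds on the attached disks.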

My rule of thumb for performance is that a zpool mirror or RAID group will 
perform like a single drive. Also remember it's just:

zpool add poolname cache devname

to add a fast flash/HDD device as cache. A USB device works OK, but I found the 
spare IDE port on the mobo to be best (CF-to-IDE widgets are cheap too). I have 
a DVD-RW on my IDE as well.
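
In concrete terms that's something like this (the pool and device names here 
are just placeholders; check "format" for your actual device names):

zpool add tank cache c2d0
zpool status tank    # the cache device shows up under its own "cache" heading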

If your Ethernet switch is capable of trunking/bonding/Etherchannel, that 
"just works" out of the box with OSol too if you need more bandwidth. Ditto 
for ZFS's integrated NFS and kernel CIFS support. Webmin is available as a 
package to help avoid CLI schizophrenia if you use multiple UNIX variants 
daily. "svcs -a | grep daemon-name" and svcadm are also your friends on a 
regular basis (turning off X on a fileserver, for instance). The default OSol 
install also sets up "root" as an RBAC role, not an actual user. If you want 
a "real" root, skip user account creation in the install.
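
To give a flavour of it (the dataset name below is made up, and the graphical 
login service name differs between Solaris Express and OSol releases, so check 
"svcs -a" first):

zfs set sharenfs=on tank/media    # ZFS-integrated NFS export
zfs set sharesmb=on tank/media    # kernel CIFS share
svcs -a | grep graphical-login    # find the X login service
svcadm disable gdm                # or cde-login, depending on release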

The only thing I couldn't find as a package at the time was apcupsd, for the 
server's UPS. Download it, run the configure script, select your options, and 
install. See the apcupsd forums for details.
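
It's the usual source-build dance; roughly (the configure options depend on 
your UPS model and cable, so check the apcupsd docs rather than copying this 
verbatim):

./configure    # pick the options matching your UPS/cable here
make
make install   # as root, or via pfexec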

I ran into a weirdness with the SiL "RAID" card: it wouldn't let me configure 
JBOD, so I have two single-disk RAID groups configured on the card. Weird, but 
it works, and I guess that's what I get for being cheap, err, frugal. :-)

Offline, a lot of the local Sun/Solaris User Groups run installfests on a 
regular basis, which can be a shortcut and save much gnashing of teeth and 
hair pulling.

Hope that all helps! Best of Luck!

 James

Sidebar: I don't do the blog thing. (Feels like I'm the only person at Sun who 
doesn't!) So anyone who does, feel free to cut and paste/whatever anything 
useful from the above.


-----Original Message-----
From: James C. McPherson <[EMAIL PROTECTED]>
Sent: Sunday, 20 Jul 2008 07:13 AM
To: mike <[EMAIL PROTECTED]>
Cc: [email protected]
Subject: Re: [storage-discuss] Quietest 6+ drive box for ZFS/rsync

Hi Mike,

mike wrote:
> I'm looking for some suggestions here - I am an OSOL noob. I have used
> Linux mainly, and some FreeBSD. Originally I planned on using FreeBSD for
> this, but am a bit nervous about its ZFS support, might as well run it
> on the native kernel, and the upcoming OSOL build has a lot of neat
> sounding features.

If I wasn't running OpenSolaris I'd be using FreeBSD. I've got a lot of
confidence in Pawel's ZFS port.

> Basically, my plan is to build a machine I can stick at home that will do
> nightly rsyncs to servers that I administer for various reasons. When the
> rsync is done, create a snapshot. Essentially nightly snapshots on the
> receiver side, since I cannot change the OS/filesystem on the remote
> servers.

I don't know that there's a "when rsync is finished, run script Z"
sort of facility (though I reckon it would be handy, as would one for
ZFS recv), so I'd suggest just using ZFS snapshots kicked off by cron
at whatever period you determine to be most appropriate.
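
Something along these lines in a script run from cron would do it (the
host, path and dataset names below are purely illustrative):

#!/bin/sh
# Pull from the remote box, then snapshot the receiving dataset
# only if the rsync completed successfully.
FS=tank/backups/server01
rsync -a --delete remotehost:/export/data/ /$FS/ \
    && zfs snapshot $FS@`date +%Y%m%d-%H%M`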

> I'd plan on setting up one ZFS filesystem for each server. 15 at the most
> to start. Probably wouldn't grow very much more than that.
> 
> I'm looking at a few million files, no more than 2TB of data to start.
> But I'd want plenty of room to grow. I also might decide to use this
> machine for home media storage as well (so I'd be using CIFS and/or NFS
> clients to access it)
> 
> I'm looking for good hardware suggestions, I'd want 6 drives minimum. The
> main thing is finding a chassis that will keep all of this quiet. Then
> finding the right motherboard and/or extra SATA controller for more ports
> (if needed) with well-supported chipsets, etc.

A lot of people posting here appear to like chassis from SuperMicro.
You might also want to consider an external enclosure; I've had good
perf with a 4 disk Proavio Enhance 4-ML unit hanging off an LSI SAS
controller. Proavio also do 8 disk units.


> Services: ssh, ftp (maybe), cifs, nfs (maybe)
> How much RAM would you suggest for this? I'm thinking 4GB should handle
> these needs, but I have never adminned/dealt with a Solaris machine
> before.

If that's all you're doing, 4GB should be plenty. At least for today ;-)

> I'd be planning on running this on an Intel Core2 architecture. Would a
> dual-core suffice? I assume so.

ZFS eats address space for breakfast, lunch and dinner, so 64bit is
the main concern, really. If you can pack in more cores, so much the
better for you. Don't forget to spec a good nic too.

> I'd be initiating the rsyncs at random times throughout the day - so it
> won't be one huge hit at once, if that helps at all. 
> Any help is appreciated. FYI, I got more tempted to jump ship and try
> OSOL for storage after reading this:
> http://elektronkind.org/2008/07/opensolaris-2008-11-storage

Just remember that ZFS snapshots are cheap (*very* cheap) and you can
create as many as you want. For all practical definitions of "as many
as you want", though, surely 2^48 would be enough ;)


Finally, welcome to the community.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
