You've received a greeting from a family member!

2007-08-18 Thread egreetings.com
You have just received a virtual postcard from a family member! You can pick up your postcard at the following web address: [1]http://www2.postcards.org/?a91-valets-cloud-31337 If you can't click on the web address above, you can also visit 1001 Postcards at

Re: large RAID volume partition strategy

2007-08-18 Thread Thomas Hurst
* Vivek Khera ([EMAIL PROTECTED]) wrote: I'll investigate this option. Does anyone know the stability/reliability of the mpt(4) driver on CURRENT? Is it out of the Giant lock yet? It was hard to tell from the TODO list if it is entirely free of Giant or not. Yes, mpt(4) was made MPSAFE in
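
For readers wondering what "MPSAFE" means in practice: a Giant-free driver registers its interrupt handler with INTR_MPSAFE and guards its own state with a private mutex. A minimal C sketch of that pattern follows; it uses a hypothetical foo(4) driver, not the actual mpt(4) source, and the bus_setup_intr() arguments shown are the FreeBSD 7-era CURRENT form.

    /*
     * Illustrative sketch only -- a hypothetical foo(4) driver, not the
     * real mpt(4) source.  "MPSAFE" means the driver no longer relies on
     * the kernel-wide Giant lock: it registers its interrupt handler with
     * INTR_MPSAFE and serializes its own state with a private mutex.
     */
    #include <sys/param.h>
    #include <sys/bus.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/rman.h>

    struct foo_softc {
        struct mtx  sc_mtx;          /* per-device lock, replaces Giant */
        void       *sc_intr_cookie;
    };

    static void
    foo_intr(void *arg)
    {
        struct foo_softc *sc = arg;

        mtx_lock(&sc->sc_mtx);       /* Giant is not held on entry */
        /* ... service the controller ... */
        mtx_unlock(&sc->sc_mtx);
    }

    static int
    foo_setup_intr(device_t dev, struct foo_softc *sc, struct resource *irq)
    {
        mtx_init(&sc->sc_mtx, "foo", NULL, MTX_DEF);

        /* INTR_MPSAFE tells the kernel not to wrap the handler in Giant. */
        return (bus_setup_intr(dev, irq, INTR_TYPE_CAM | INTR_MPSAFE,
            NULL, foo_intr, sc, &sc->sc_intr_cookie));
    }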

Re: large RAID volume partition strategy

2007-08-18 Thread Matthew Seaman
Clayton Milos wrote: If you want awesome performance and reliability the real way to go is RAID10 (or more correctly RAID 0+1). RAID10 and RAID0+1 are very different beasts. RAID10 is the best choice for a read/write intensive f/s with valuable
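
The difference is easy to see by enumerating double-disk failures. Below is a small C sketch for a hypothetical 4-disk array (disks 0-3): RAID10 stripes across the mirror pairs (0,1) and (2,3), while RAID0+1 mirrors stripe set {0,1} against stripe set {2,3}.

    /*
     * Illustrative sketch: enumerate every two-disk failure in a
     * hypothetical 4-disk array (disks 0-3).  RAID10 stripes across the
     * mirror pairs (0,1) and (2,3); RAID0+1 mirrors stripe set {0,1}
     * against stripe set {2,3}.
     */
    #include <stdio.h>

    /* RAID10 dies only when both members of one mirror pair are lost. */
    static int
    raid10_survives(int a, int b)
    {
        return (a / 2 != b / 2);
    }

    /* RAID0+1 dies once each stripe set has lost at least one disk. */
    static int
    raid01_survives(int a, int b)
    {
        return (a / 2 == b / 2);    /* both failures in the same set */
    }

    int
    main(void)
    {
        int a, b, total = 0, s10 = 0, s01 = 0;

        for (a = 0; a < 4; a++)
            for (b = a + 1; b < 4; b++) {
                total++;
                s10 += raid10_survives(a, b);
                s01 += raid01_survives(a, b);
            }
        /* prints: 6 two-disk failures: RAID10 survives 4, RAID0+1 survives 2 */
        printf("%d two-disk failures: RAID10 survives %d, RAID0+1 survives %d\n",
            total, s10, s01);
        return (0);
    }

RAID10 rides out four of the six possible double failures, RAID0+1 only two, which is why the two layouts are not interchangeable for valuable data.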

Re: large RAID volume partition strategy

2007-08-18 Thread Torfinn Ingolfsen
On Fri, 17 Aug 2007 21:50:53 -0400 Vivek Khera [EMAIL PROTECTED] wrote: My only fear of this is that once this system is in production, that's pretty much it. Maintenance windows are about 1 year apart, usually longer. Seems to me you really should want a redundant/clustered system,

Recent PAM changes worth an UPDATING entry?

2007-08-18 Thread Doug Barton
Howdy, I just rebuilt and installed my world on my 6-STABLE box, and ran into a snag. Like a lot of users I use -DNO_CLEAN in buildworld since this is a very slow box that I use mostly as a file/DNS server. After rebooting I could ssh in ok (probably because I don't use PAM for sshd) but
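
For context: when sshd is built with PAM, logins are gated by the policy in /etc/pam.d/sshd, so stale or missing PAM modules after an installworld can lock remote users out. A trimmed, FreeBSD-style policy is sketched below purely for illustration; it is not Doug's actual file, and the stock version also chains pam_nologin.so, pam_opie.so and friends.

    # minimal sketch of /etc/pam.d/sshd -- illustrative only
    auth        required    pam_unix.so    no_warn try_first_pass
    account     required    pam_unix.so
    session     required    pam_permit.so
    password    required    pam_unix.so    no_warn try_first_pass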

Re: large RAID volume partition strategy

2007-08-18 Thread Vivek Khera
On Aug 18, 2007, at 4:09 AM, Thomas Hurst wrote: Best temper your fear with some thorough testing then. If you are going to use ZFS in such a situation, though, I might be strongly tempted to use Solaris instead. Why the long gaps between maintenance? This is a DB server for a 24x7