[zfs-discuss] Drive MTBF WAS: 350TB+ storage solution

2011-05-18 Thread Paul Kraus
On Mon, May 16, 2011 at 8:45 PM, Jim Klimov jimkli...@cos.ru wrote: If MTBFs were real, we'd never see disks failing within a year ;)

Remember that MTBF (and MTTR and MTTDL) are *statistics* and not guarantees. If a type of drive has an MTBF of 10 years, then the MEAN (average) time between
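To put a number on that (a back-of-the-envelope sketch, assuming the textbook exponential failure model, which real drives with infant mortality and wear-out only loosely follow): a 10-year MTBF still implies that about 1 - e^(-1/10), roughly 9.5%, of drives fail within their first year, so in-warranty failures are entirely consistent with the statistic. A quick check:

  # annual failure probability for a 10-year MTBF, exponential model
  $ echo '1 - e(-1/10)' | bc -l
  .09516258196404042684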

[zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Paul Kraus
Over the past few months I have seen FreeBSD mentioned a couple of times in regard to ZFS. My question is: how stable (reliable) is ZFS on this platform? This is for a home server, and the reason I am asking is that about a year ago I bought some hardware based on its inclusion on the

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Tim Cook
On Wed, May 18, 2011 at 7:47 AM, Paul Kraus p...@kraus-haus.org wrote: Over the past few months I have seen FreeBSD mentioned a couple of times in regard to ZFS. My question is: how stable (reliable) is ZFS on this platform? This is for a home server, and the reason I am asking is that

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Bob Friesenhahn
On Wed, 18 May 2011, Paul Kraus wrote: Over the past few months I have seen FreeBSD mentioned a couple of times in regard to ZFS. My question is: how stable (reliable) is ZFS on this platform?

This would be an excellent question to ask on the related FreeBSD mailing list

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread a . smith
Hi, I am using FreeBSD 8.2 in production with ZFS. Although I have had one issue with it in the past, I would still recommend it, and I consider it production-ready. That said, if you can wait for FreeBSD 8.3 or 9.0 to come out (a few months away), you will get a better system, as these will

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Freddie Cash
On Wed, May 18, 2011 at 5:47 AM, Paul Kraus p...@kraus-haus.org wrote: Over the past few months I have seen FreeBSD mentioned a couple of times in regard to ZFS. My question is: how stable (reliable) is ZFS on this platform?

ZFSv15, as shipped with FreeBSD 8.2, is rock solid in our uses. We

Re: [zfs-discuss] 350TB+ storage solution

2011-05-18 Thread Chris Mosetick
The drives I just bought were half packed in white foam, then wrapped in bubble wrap. Not all edges were protected with more than bubble wrap.

Same here. I purchased 10 x 2TB Hitachi 7200rpm SATA disks from Newegg.com in March. The majority of the drives were protected in white foam.

Re: [zfs-discuss] 350TB+ storage solution

2011-05-18 Thread Rich Teer
On Wed, 18 May 2011, Chris Mosetick wrote: to go in the packing dept. I still love their prices!

There's a reason for that: you don't get what you don't pay for! -- Rich Teer, Publisher, Vinylphile Magazine, www.vinylphilemag.com

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Brandon High
On Wed, May 18, 2011 at 5:47 AM, Paul Kraus p...@kraus-haus.org wrote: P.S. If anyone here has a suggestion as to how to get Solaris to load, I would love to hear it. I even tried disabling multi-core support (which makes the CPUs look like dual-core instead of quad-core) with no change. I have not been

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Garrett D'Amore
We might have a better chance of diagnosing your problem if we had a copy of your panic message buffer. Have you considered OpenIndiana and illumos as an option, or even NexentaStor if you are just looking for a storage appliance (though my guess is that you need more general-purpose compute
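For reference, the usual way to pull that buffer on a Solaris-family system is the ::msgbuf dcmd; this is a sketch that assumes savecore has saved dump number 0 under the default /var/crash directory:

  # panic string and preceding console messages, read from the saved crash dump
  cd /var/crash/`hostname`
  echo '::msgbuf' | mdb unix.0 vmcore.0

(echo '::msgbuf' | mdb -k does the same against the live kernel, but after a reboot the live buffer no longer holds the panic.)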

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread Donald Stahl
Wow- so a bit of an update: With the default scrub delay:

  echo zfs_scrub_delay/K | mdb -kw
  zfs_scrub_delay:20004

  pool0  14.1T  25.3T  165  499  1.28M  2.88M
  pool0  14.1T  25.3T  146    0  1.13M      0
  pool0  14.1T  25.3T  147    0  1.14M      0
  pool0  14.1T
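For readers following along, the idiom above spelled out (a sketch; zfs_scrub_delay is a private kernel tunable, so the name and default value can differ between builds):

  # print the current scrub delay (mdb shows the value in hex)
  echo 'zfs_scrub_delay/K' | mdb -k

  # sample pool throughput every 5 seconds while the scrub runs
  zpool iostat pool0 5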

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread George Wilson
Don, try setting the zfs_scrub_delay to 1 but increase the zfs_top_maxinflight to something like 64. Thanks, George

On Wed, May 18, 2011 at 5:48 PM, Donald Stahl d...@blacksun.org wrote: Wow- so a bit of an update: With the default scrub delay: echo zfs_scrub_delay/K | mdb -kw
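Spelled out as mdb writes, a sketch (the 0t prefix marks the values as decimal; both names are private tunables that may vary by build):

  # throttle: 1 tick between scrub I/Os
  echo 'zfs_scrub_delay/W0t1' | mdb -kw
  # allow up to 64 scrub I/Os in flight per top-level vdev
  echo 'zfs_top_maxinflight/W0t64' | mdb -kw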

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey: New problem: I'm following all the advice I summarized in the OP of this thread and testing on a test system (a laptop), and it's just not working. I am jumping

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread Donald Stahl
Try setting the zfs_scrub_delay to 1 but increase the zfs_top_maxinflight to something like 64.

The array is running some regression tests right now, but when it quiets down I'll try that change. -Don

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread Donald Stahl
Try setting the zfs_scrub_delay to 1 but increase the zfs_top_maxinflight to something like 64.

With the delay set to 1 or higher, it doesn't matter what I set the maxinflight value to. When I check with:

  echo '::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight' | mdb -k

The value returned
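For what that dcmd does: ::walk spa iterates over every imported pool's spa_t, and ::print pulls the named fields, so spa_scrub_inflight is the number of scrub I/Os outstanding at that instant. A sketch of a watch loop, assuming a root shell:

  while true; do
    echo '::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight' | mdb -k
    sleep 5
  done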