Re: [zfs-discuss] Zpool LUN Sizes

2012-10-28 Thread Fajar A. Nugraha
On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha

 So my
 suggestion is actually to just present one huge 25TB LUN to zfs and let
 the SAN handle redundancy.

 create a bunch of 1-disk volumes and let ZFS handle them as if they're JBOD.

The last time I used IBM's enterprise storage (which was, admittedly, a
long time ago), you couldn't even do that. And judging from Morris' mail
address, that should be relevant here :)

... or maybe it's just me who never figured out how to do that, which
is why I suggested just using whatever the SAN can present :)
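
For anyone following along, the two layouts being discussed look roughly
like this (pool and device names below are only placeholders; use whatever
the LUNs actually show up as on your host):

  # one big 25TB LUN; the SAN handles all redundancy
  zpool create tank c2t0d0

  # one LUN per physical disk; ZFS handles redundancy (raidz2 as an example)
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

The second form only makes sense if the array will actually export one LUN
per spindle, which is exactly what's in question here.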

-- 
Fajar


Re: [zfs-discuss] Zpool LUN Sizes

2012-10-28 Thread Gary Mills
On Sun, Oct 28, 2012 at 04:43:34PM +0700, Fajar A. Nugraha wrote:
 On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
 (opensolarisisdeadlongliveopensolaris)
 opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
 
  So my
  suggestion is actually to just present one huge 25TB LUN to zfs and let
  the SAN handle redundancy.
 
  create a bunch of 1-disk volumes and let ZFS handle them as if they're JBOD.
 
 The last time I used IBM's enterprise storage (which was, admittedly, a
 long time ago), you couldn't even do that. And judging from Morris' mail
 address, that should be relevant here :)
 
 ... or maybe it's just me who never figured out how to do that, which
 is why I suggested just using whatever the SAN can present :)

You are entering the uncharted waters of ``multi-level disk
management'' here.  Both ZFS and the SAN use redundancy and error
checking to ensure data integrity.  Both of them also handle automatic
replacement of failing disks.  A good SAN will present LUNs that
behave as perfectly reliable virtual disks, guaranteed to be error
free.  Almost all of the time, ZFS will therefore find no errors.  If
ZFS does find an error, though, there is no clean way to recover,
because a pool built on a single LUN has no ZFS-level redundancy to
repair from.  Most commonly, this happens when the SAN is powered down
or rebooted while the ZFS host is still running.
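
If you do end up in that situation, something along these lines (standard
commands; "tank" is just a placeholder pool name) will at least show whether
ZFS flagged any checksum errors after the SAN came back:

  zpool scrub tank
  zpool status -v tank

With no ZFS-level redundancy, anything found that way can generally only be
listed, not repaired; restoring the affected files from backup is usually
the only remedy.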

-- 
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-


Re: [zfs-discuss] Scrub and checksum permutations

2012-10-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov
 
 I tend to agree that parity calculations likely
 are faster (even if not all parities are simple XORs - that would
 be silly for double- or triple-parity sets which may use different
 algos just to be sure).

Even though parity calculation is faster than fletcher, which is faster than
sha256, it's all irrelevant except in the very largest of file servers.  Go write
to disk or read from disk as fast as you can, and see how much CPU you use.  Even
on the moderate file servers I've tried this on (a dozen disks in parallel), the
CPU load is negligible.
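
A crude way to run that experiment yourself (pool and file names below are
just placeholders): push a few GB through the hash from memory, then stream
the same amount to the pool while watching the CPU from another terminal.

  # checksum throughput alone, no disks involved (digest(1) on Solaris)
  dd if=/dev/zero bs=1M count=4096 | digest -a sha256

  # streaming write to the pool, with CPU usage visible in prstat
  dd if=/dev/zero of=/tank/testfile bs=1M count=4096 &
  prstat -mL

On a moderate box like that, the disks stay busy long before the checksum
work makes a dent in the CPU, which is exactly the point.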

If you ever get to a scale where the CPU load becomes significant, you solve it
by adding more CPUs.  There is a limit somewhere, but it's huge.
