Re: [zfs-discuss] [osol-help] 1TB ZFS thin provisioned partition prevents Opensolaris from booting.

2008-05-30 Thread Hugh Saunders
On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai
[EMAIL PROTECTED] wrote:
 I think it's right. You'd have to move to a 64-bit kernel. Any reasons to
 stick to a 32-bit kernel?

My reason would be lack of 64-bit hardware :(
Is this an iSCSI-specific limitation, or will any multi-TB pool have
problems on 32-bit hardware?
If so, what's the upper bound on pool size on 32-bit?
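
For anyone checking their own machine: a quick sketch of how to tell
which kernel is running on Solaris/OpenSolaris (`isainfo` ships with the
base system):

```shell
# Report the instruction set of the running kernel:
isainfo -kv
# A 64-bit system prints something like "64-bit amd64 kernel modules";
# a 32-bit kernel prints "32-bit i386 kernel modules".
```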

-- 
Hugh Saunders
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-24 Thread Hugh Saunders
On Sat, May 24, 2008 at 3:21 AM, Richard Elling [EMAIL PROTECTED] wrote:
 Consider a case where you might use large, slow SATA drives (1 TByte,
 7,200 rpm)
 for the main storage, and a single small, fast (36 GByte, 15krpm) drive
 for the
 L2ARC.  This might provide a reasonable cost/performance trade-off.

In this case (or in any other case where a cache device is used), does
the cache improve write performance, or only reads?
I presume it cannot increase write performance, as the cache is
considered volatile, so a write couldn't be committed until the
data had left the cache device?

From the ZFS admin guide [1]: "Using cache devices provide the greatest
performance improvement for random read-workloads of mostly static
content." I'm not sure if that means no performance increase for
writes, or just not very much.
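
For reference, a cache device is attached with `zpool add`; the pool
name `tank` and the device name here are placeholders:

```shell
# Attach a fast disk as an L2ARC cache device (a read cache only):
zpool add tank cache c4t0d0

# Verify: the device should now appear under a "cache" heading.
zpool status tank
```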

[1]http://docs.sun.com/app/docs/doc/817-2271/gaynr?a=view

-- 
Hugh Saunders


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-24 Thread Hugh Saunders
On Sat, May 24, 2008 at 4:00 PM,  [EMAIL PROTECTED] wrote:

   cache improve write performance or only reads?

 The L2ARC cache device is for reads; for writes you want the
 Intent Log.

Thanks for answering my question, I had seen mention of intent log
devices, but wasn't sure of their purpose.

If only one significantly faster disk is available, would it make
sense to slice it, using one slice for the L2ARC and another for the
ZIL? Or would that cause horrible thrashing?
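
A sketch of what that split might look like, assuming the fast disk is
`c2t0d0` and has already been sliced with format(1M); whether the seek
contention between the two workloads is acceptable would need measuring:

```shell
# Slice 0 as a separate intent log device (helps synchronous writes):
zpool add tank log c2t0d0s0

# Slice 1 as an L2ARC cache device (helps random reads):
zpool add tank cache c2t0d0s1
```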

-- 
Hugh Saunders


Re: [zfs-discuss] zfs raidz2 configuration mistake

2008-05-21 Thread Hugh Saunders
On Wed, May 21, 2008 at 2:54 PM, Claus Guttesen [EMAIL PROTECTED] wrote:
 zpool add -f external c12t0d0p0
 zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
 that's because the fs was online)

 No, it had nothing to do with the pool being online.  It was because a
 single disk was being added to a pool with raidz2.  The error message that
 zpool would have displayed, without the -f, is something like: 'mismatched
 replication level'.

 By using the -f the files are now striping among three vdevs: the original
 raidz2, and each of the new disks.

 Isn't one supposed to be able to add more disks to an existing
 raidz(2) pool and have the data spread across all disks in the pool
 automagically?

In my understanding, the raidz level applies to the vdev, not the
pool. vdevs can be added to the pool, and dynamic striping then
distributes data between them.
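
In other words, to grow a raidz2 pool while keeping its replication
level, you add a whole new raidz2 vdev rather than single disks (device
names here are placeholders):

```shell
# What -f forced through above: a single unreplicated disk becomes its
# own top-level vdev, with no redundancy of its own.
# zpool add -f external c12t0d0p0

# The matching way: add another complete raidz2 vdev; ZFS then
# dynamically stripes new writes across both top-level vdevs.
zpool add external raidz2 c12t0d0 c13t0d0 c14t0d0 c15t0d0
```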

-- 
Hugh Saunders