Hello Christine,

Saturday, December 16, 2006, 12:17:12 AM, you wrote:

CT> Hi,

CT> I guess we are acquainted with the ZFS Wikipedia article?
CT> http://en.wikipedia.org/wiki/ZFS

CT> Customers refer to it, and I wonder where the Wiki gets its numbers.
CT> For example, there's a Sun marketing slide that says "unlimited
CT> snapshots", contradicted by the first bullet:

CT> 2^48 — Number of snapshots in any file system (≈ 2.8 × 10^14)

CT> Is this correct?

I believe it is.
However, a limit of 2^48 snapshots per file system is, for every
practical purpose, no limit at all. So when it comes to marketing
I'd say they got it right.
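
To put 2^48 in perspective: even taking one snapshot every single
second, you would need roughly 8.9 million years to hit the limit.
And snapshots are trivial to create - something like (pool/dataset
names made up):

zfs snapshot tank/home@2006-12-16
zfs list -t snapshot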


CT> I want to make a mirrored pool but after creation detach one side of the
CT> mirror and run like this indefinitely.  Can I do this?  I have yet to 
CT> try it out, just wondering if someone knows the answer right off the bat.

Of course you can.
However, you won't be able to "mount" the detached submirror - the
detached disk is no longer usable as a pool on its own.
I guess it should be quite easy to implement if needed (and yes,
it's needed).
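
Something like this (untested sketch, device names made up):

# create a two-way mirror, then detach one side
zpool create tank mirror c1t0d0 c1t1d0
zpool detach tank c1t1d0
zpool status tank

The pool keeps running on the remaining disk (non-redundant from that
point on); the detached disk no longer belongs to any pool, which is
why you can't import or mount it by itself.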


CT> Why can't I have a vdev of type concat?  A customer says he has hardware
CT> RAID-0 and does not want to stripe over stripe.  Are there known 
CT> problems with stripes on stripes, unintentional hotspots maybe?

It depends on how those stripes on the HW array are actually
configured, and on the data access pattern. If the configuration is
pathological (see below), then once you put in enough data that more
than one LUN is used, you'll run into trouble even with
concatenation. But generally, if the config is done right, stripe on
stripe shouldn't be a problem at all - rather, it should give you
more performance.

By a pathological situation I mean a config like this: create a
stripe from some hypers (partitions), each on a different disk, then
make another stripe from other hypers on the same disks, and then
stripe between such LUNs on the host. Both host-side stripe columns
then compete for the same spindles, which is where the hotspots come
from.
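
Note that ZFS has no concat vdev type anyway - give it several
top-level vdevs and it dynamically stripes across them. Assuming each
LUN is a HW RAID-0 volume on its own set of disks, the config would
be just (device names made up):

# ZFS dynamically stripes across the two top-level vdevs,
# each of which is assumed to be a HW RAID-0 LUN
zpool create tank c2t0d0 c2t1d0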


CT> Data checksum is used only to detect bad data, and not to recover data,
CT> correct?  Just verifying.

It depends. If you do only striping in ZFS then you're right (not
counting the ditto blocks ZFS keeps for metadata - those can still
be used for recovery). Once you create a redundant pool in ZFS,
you're not only able to detect bad data but also to correct it on
the fly. That's what ZFS does for you automatically.
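
For example, with a mirror ZFS rewrites any block that fails its
checksum using the good copy from the other side, and a scrub walks
the whole pool doing the same (device names made up, commands are
standard):

zpool create tank mirror c1t0d0 c1t1d0
zpool scrub tank      # re-read everything, repair from the mirror
zpool status tank     # the CKSUM column counts detected errors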


CT> Can I use ZFS specifically with PowerPath?  I remember George W. 
CT> mentioning that PowerPath tries to do something clever and it interfered
CT> with ZFS but I can't remember the specifics.

In theory it should work. However, someone else reported a problem
with such a combination, and IIRC it was about PowerPath not
exporting all the LUN details rather than PP doing anything clever
itself. I hope it has been fixed by EMC, and if not, it's probably a
good idea to file a bug with EMC (maybe I should test it and file it
myself - I just need some time).
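
If I get to test it: I believe PowerPath presents its pseudo devices
as emcpowerN on Solaris, so the test should be roughly (device name
made up, not verified):

zpool create tank emcpower0c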

CT> Will I be able to tune the DMU "flush" rate, now set at 5 seconds?

echo 'txg_time/W 0t1' | mdb -kw

will change it to 1 second.
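
You can check the current value first with:

echo 'txg_time/D' | mdb -k

The mdb change won't survive a reboot; to make it persistent I would
expect an /etc/system entry to work (assuming txg_time lives in the
zfs module):

set zfs:txg_time = 1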


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com
