On Fri, Oct 19, 2007 at 12:48:45PM -0400, Bill Sommerfeld wrote:
> a couple use-case questions:
> 
>  1) are the file contents as an opaque binary blob intended to be
> usefully mobile between systems which share access to the pool storage?

Yes.

> in other words, would it be meaningful for clustering infrastructure to
> replicate a cachefile between nodes which may import a pool, or is the
> expected usage of this feature that each node will maintain caches of
> the cluster-managed pools it has imported in the past and may import
> again in the future?

The intent is to replicate this cachefile to shared storage in some
cluster-specific way.  I'm not sure exactly how the second scenario
would work, but it would certainly be possible.
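For example, a failover agent might do something like the following
(the shared-storage path here is purely illustrative):

      # On the node that currently has the pool imported, copy the
      # cachefile to cluster-shared storage:
      cp /etc/zfs/zpool.cache /shared/cluster/zpool.cache

      # On the node taking over, import every pool recorded in the
      # replicated cachefile:
      zpool import -c /shared/cluster/zpool.cache -a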

>  2) in disaster recovery situations, I've been instructed to do
> something like:
> 
> boot -m milestone=none
> 
> ... (mount root writeable) ...
> 
> mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
> ...
> as a mechanism to boot without opening the pool.
> 
> I think this means that the filename of /etc/zfs/zpool.cache is turning
> into a de-facto administrative interface because there is no more-stable
> interface available to "export" a potentially-toxic pool.  

Yes, but this won't work in a ZFS root world.  Obviously the right
answer is to not panic on toxic pools, but we will need some non-trivial
infrastructure to make this operation possible in the brave new world of
ZFS boot.  I'm fine with it being a "de-facto administrative interface",
as long as it's not a "committed interface".
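To spell out the sequence you describe, for anyone following along
(the remount invocation is illustrative and depends on your root
device and filesystem):

      # At the OBP prompt, boot without bringing up any services:
      ok boot -m milestone=none

      # Remount the root filesystem read-write, e.g.:
      mount -o remount,rw /

      # Move the cachefile aside so no pools are opened on the next
      # boot:
      mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old

      reboot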

> Does that mean I can check to see that I have all disks making up the
> pools I would have opened on boot using: 
> 
>       zpool import -c /etc/zfs/zpool.cache.old

Yes, you can do this.  Depending on the toxicity, this may or may not
trigger the original problem ;-)
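To expand on that: run with no pool name, the command only lists what
the cachefile describes, e.g.:

      # Scan for pools recorded in the saved cachefile without
      # importing them; each pool's reported state (ONLINE, DEGRADED,
      # UNAVAIL) indicates whether all of its devices were found:
      zpool import -c /etc/zfs/zpool.cache.old

      # Adding a pool name (or -a) would actually import the pool:
      # zpool import -c /etc/zfs/zpool.cache.old <poolname>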

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
