Hi SHOUJIN,
The work to use the cachefile property in HAS+ is complete and will be
available in the next release of OHAC.

Thanks
-Venku

On 02/11/09 04:12, Robert Milkowski wrote:
> Hello SHOUJIN,
> 
> Sunday, February 8, 2009, 5:36:13 AM, you wrote:
> 
> SW> I created a 2-node NFS cluster with ZFS+NFS. I used the resource
> SW> type SUNW.HAStoragePlus for the zpool. During failover, the zpool
> SW> needs to be imported on the 2nd node.
> SW> Unfortunately, the "zpool import" is very slow (it needs about 2.5
> SW> minutes on my system). This makes the whole NFS service failover
> SW> very slow. I searched the ZFS documentation and there is a workaround
> SW> that can make the "zpool import" very fast: set a cache file for the
> SW> pool, then use "zpool import -c CACHE_FILE" to import from the cache
> SW> file. But I don't know how to add this kind of special command
> SW> line to the HA cluster configuration so that the agent can call it
> SW> when a failover happens. Could anybody help with this?
> 
> I don't believe that SC supports the cachefile property currently - it
> would be a good RFE. You should be able to implement it yourself via
> GDS if really needed - but first I would check what happens if you
> manually import a pool with and without a cachefile and compare the
> timings. Also, the last time I checked, HAS+ actually imports twice (it
> looks like the first time it is checking whether the pool is OK to
> import, and if it is, then it imports again, going through all the
> scanning, etc.). And in recent Nevada builds imports happen in a more
> parallel way - instead of reading labels one by one it happens in
> parallel - I don't know whether that has been backported to S10.
> 
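
For reference, until the new HAS+ support ships, the manual cachefile
workaround discussed above looks roughly like this. The pool name and the
cachefile paths below are just placeholders, the copy of the cachefile has
to end up somewhere the importing node can actually read, and hooking this
into an HAStoragePlus/GDS start method is a separate question not covered
here:

    # On the node that currently owns the pool, point it at a
    # non-default cachefile and keep a copy somewhere the other node
    # can read (exporting the pool removes it from the live cachefile,
    # so you import from the saved copy).
    zpool set cachefile=/path/to/tank.cache tank
    cp /path/to/tank.cache /shared/path/tank.cache

    # On the node taking over (after the pool is exported or the first
    # node is down), import from the saved cachefile instead of
    # scanning every device label.
    zpool import -c /shared/path/tank.cache tank

    # Rough timing comparison of the two import paths.
    time zpool import tank
    zpool export tank
    time zpool import -c /shared/path/tank.cache tank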
