[zfs-discuss] ZFS disable startup import

2009-03-02 Thread smart trams

Hi All,

   
   What I want is a way to disable ZFS's startup import process, so that on
every server reboot I can manually import the pools and mount them on the
required mount points.
   Dataset properties like mountpoint=legacy or canmount only affect mounting
behavior; I found no command that disables the startup import itself. My
systems run Solaris on SPARC.
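   To illustrate, here is the kind of thing I tried (on the pool 'xpool'
described below; commands shown for context only):

   # These properties only control whether/where the datasets get mounted:
   zfs set mountpoint=legacy xpool
   zfs set canmount=off xpool
   # The pool is still recorded in /etc/zfs/zpool.cache, so it is still
   # imported automatically at the next boot.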

   Why do I need this? Good question! I have an active/standby clustered
environment: two servers sharing one SAN disk. The shared disk holds one ZFS
pool [xpool] that must be imported and mounted on exactly one server at any
time. When the active server dies, my cluster software [Veritas Cluster]
detects the failure, imports 'xpool' [with the -f switch] on the standby
server, and starts the applications.
   Everything is fine up to that point. The trouble starts when the failed
server boots back up: it automatically re-imports 'xpool' and lists it as one
of its own pools. Note that I am not talking about mounting at any mountpoint,
only about the pool being imported. Now two nodes are writing to the same pool
and the pool becomes inconsistent! What I want is to disable this ZFS
behaviour and force it to wait until my cluster software decides which server
is active.
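   To spell out the failure sequence (commands illustrative):

   # Standby node, run by the cluster agent when the active node dies:
   zpool import -f xpool     # forced, because the pool was never exported
   # Failed node, at next boot: it finds xpool in its local
   # /etc/zfs/zpool.cache and quietly imports it again, at which point
   # both nodes hold the pool open for writing.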

Thanks in advance for your prompt reply.

Regards
Smart!


  


Re: [zfs-discuss] ZFS disable startup import

2009-03-02 Thread Dave

smart trams wrote:

> What I want is a way to disable ZFS's startup import process, so that on
> every server reboot I can manually import the pools and mount them on the
> required mount points.
> [...]
> Now two nodes are writing to the same pool and the pool becomes
> inconsistent! What I want is to disable this ZFS behaviour and force it
> to wait until my cluster software decides which server is active.

Use the cachefile=none option whenever you import the pool on either server:

zpool import -o cachefile=none xpool
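
With cachefile=none the pool is never recorded in /etc/zfs/zpool.cache, so
neither node will auto-import it at boot. A minimal sketch of the resulting
workflow (pool name taken from your mail; note the property is not persistent,
so it must be passed on every import, and the exact behaviour is worth
verifying on your Solaris release):

# Re-import without recording the pool in the cache file:
zpool export xpool
zpool import -o cachefile=none xpool
zpool get cachefile xpool    # should report: none

# After a failover, the cluster agent (or an admin) imports manually:
zpool import -f -o cachefile=none xpool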

--
Dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss