> Now, my humble guess: I need to know the commands
> to run in the new install to de-associate c0d0s7
> from the old install and re-associate this drive with
> the new install.
> All this probably happened through the '-f' in 'zpool
> create -f newhome c0d0s7', which seemingly takes
> precedence over the earlier mount point association.
> Makes some sense. But then we would need yet another
> option that permits overwriting the data without
> changing the association.
> 
> What do I do now? Logically, booting into the other, new,
> system won't help, since doing the same from there would
> just do the reverse and associate the old home with the
> new install.

Yep, that's exactly what happened.  Zpools have a concept of ownership: they 
remember the last system that had them mounted.  This is so that in a shared 
storage environment, such as a SAN or iSCSI, two hosts don't end up controlling 
the same volume at the same time - that would be disastrous.
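
You can see that ownership check in action when you try to import a pool that 
was last touched by a different system - 'zpool import' refuses unless you 
force it.  The exact wording varies between releases, but it looks roughly 
like this:

[EMAIL PROTECTED]:~# zpool import home
cannot import 'home': pool may be in use from other system
use '-f' to import anyway
[EMAIL PROTECTED]:~# zpool import -f home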

The right way to manage the associations is with the 'zpool import' (and the 
matching 'zpool export') command.  From your "new" system, if you type "zpool 
import", it should give you a list of zpools you can import.  I suspect that 
you will see two volumes there, "home" and "newhome".  With no arguments, 
"zpool import" just shows you the list of zpools you can import without 
actually importing anything.  Here's what it looks like on my system:

[EMAIL PROTECTED]:~# zpool import
  pool: new_zpool
    id: 3042040702885268372
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        new_zpool   ONLINE
          c0t2d0s6  ONLINE

It shows that there is one pool available for import on one of my disks.  
Here is a list of the zpools I currently have imported:

[EMAIL PROTECTED]:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G    254K    354G     0%  ONLINE     -
u02                     354G    150K    354G     0%  ONLINE     -

Now I run the import command.  Note that I can even rename the pool when I 
import it, so, for example, you could import your "newhome" volume as "home".  
Here I will import "new_zpool" as "zpool".

[EMAIL PROTECTED]:~# zpool import new_zpool zpool
[EMAIL PROTECTED]:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G    254K    354G     0%  ONLINE     -
u02                     354G    150K    354G     0%  ONLINE     -
zpool                   354G    500K    354G     0%  ONLINE     -
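
The clean counterpart, when you want to hand a pool over to another system on 
purpose, is 'zpool export' - it unmounts the pool and marks it as no longer in 
use, so the next host can import it without forcing.  A quick sketch, reusing 
the pool I just imported:

[EMAIL PROTECTED]:~# zpool export zpool

After that, "zpool" drops out of 'zpool list' here and shows up in the 'zpool 
import' listing on whichever system sees that disk next.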

I just had to learn about zpool import yesterday, since I'm scripting an 
automated install of Nexenta for some of our servers - I create the default 
storage pool during the install, but when the machine reboots, the 
hostname/hostid has changed, so I need to re-associate the pool.  I know 
you're frustrated with this stuff, but once you've figured it out it really 
is very powerful. :-)
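
For what it's worth, the re-association itself is just a forced import on the 
first boot.  A minimal sketch of that boot-time step - "tank" is only a 
hypothetical pool name here, not anything Nexenta-specific:

#!/bin/sh
# First boot after the automated install: the pool still carries the
# install environment's hostid, so a plain "zpool import" would refuse.
zpool import -f tank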

-Andy