Uwe,
It was also unclear to me that legacy mounts were causing your
troubles. The ZFS Admin Guide describes ZFS mounts and legacy
mounts, here:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view
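For reference, a minimal sketch of the practical difference (dataset and mountpoint
names below are only examples):

  ZFS-managed mount - ZFS mounts the dataset itself:
  % zfs set mountpoint=/export/home home

  Legacy mount - ZFS steps aside and the dataset is mounted the traditional way:
  % zfs set mountpoint=legacy home
  % mount -F zfs home /export/home
  (plus an /etc/vfstab entry along the lines of:  home  -  /export/home  zfs  -  yes  -)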
Richard, I think we need some more basic troubleshooting info, such
as this mount failure.
Since nobody seems to have a clue, and I didn't want to give up - nor install from
scratch - I kept playing. Suddenly everything was back in place, after I hit, by
intuition, on
% zfs set mountpoint=legacy home
It beats me why and how this brought back the desired state, since I had
issued
%
Uwe Dippel wrote:
Since nobody seems to have a clue, and I didn't want to give up - nor install from
scratch - I kept playing. Suddenly everything was back in place, after I hit, by
intuition, on
% zfs set mountpoint=legacy home
It wasn't clear to me that you wanted a legacy mount, most
[EMAIL PROTECTED]:~# zpool import
  pool: new_zpool
    id: 3042040702885268372
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        new_zpool   ONLINE
          c0t2d0s6  ONLINE
It shows that there is one filesystem available for import on one of my disks.
Here is a list
[i]I create the default storage pool during the install, but then when it
reboots, the hostname/hostid has changed so I need to re-associate the pool. I
know you're frustrated with this stuff, but once you've figured it out it
really is very powerful. :-)[/i]
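If the pool only needs to be re-associated after such a hostid change, a minimal
sketch would be (pool name and id taken from the zpool import output above; -f
forces the import of a pool that was last in use on another system):

  # zpool import -f new_zpool

or, by its numeric identifier:

  # zpool import -f 3042040702885268372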
If you read my contributions, I
Uuh, I just found out that I now have the new data ... whatever, here it is:
[I did have to boot to the old system, since the new install lost its new
'home']
[i]zpool status
pool: home
state: ONLINE
scrub: none requested
config:
        NAME      STATE     READ WRITE CKSUM
home
Now, my humble guess is that I need to know the commands
to be run in the new install to de-associate c0d0s7
from the old install and re-associate this drive with
the new install.
All this probably happened through the '-f' in 'zpool
create -f newhome c0d0s7', which seemingly takes
precedence
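If both installs can still be booted, a sketch of moving the pool cleanly between
them (assuming the pool on c0d0s7 is the 'newhome' just mentioned) would be:

  on the old install:   # zpool export newhome
  on the new install:   # zpool import newhome
                        (or zpool import -f newhome, if it could not be exported first)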
Get the content of c0d1s1 to c0d0s7? c0d1s1 is pool home and active; c0d0s7 is not
active.
I have not tried this particular use case, but I think this is a case for zfs
send and zfs receive. You'd create a new pool containing only c0d0s7 and
do something like this, assuming your original
[EMAIL PROTECTED]:/u01/home# zfs snapshot u01/[EMAIL PROTECTED]
[EMAIL PROTECTED]:/u01/home# zfs send u01/[EMAIL PROTECTED] | zfs receive u02/home
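The archive has mangled the snapshot names in the commands above; the intended
sequence was presumably along these lines, with 'backup' as a purely hypothetical
snapshot name:

  # zfs snapshot u01/home@backup
  # zfs send u01/home@backup | zfs receive u02/home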
One caveat here is that I could not find a way to back up the base of the zpool
u01 into the base of zpool u02. i.e.
zfs snapshot [EMAIL PROTECTED]
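Presumably that attempt looked roughly like this (again with a hypothetical
snapshot name), the receive into the root dataset of u02 being the part that
could not be made to work:

  # zfs snapshot u01@backup
  # zfs send u01@backup | zfs receive u02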
Andy,
my apologies, I didn't really appreciate your input in my earlier mail!
[i]I can't get to the console of a system to take it to single user, but you
might try
svcadm enable -tr filesystem/local or zfs mount -a.
[/i]
Both work properly. Half of the job done; now I have
the new home mounted, but inactive. So I can rm -Rf *
or similar there, in order to 'cp -a' the content of
the old home to the new home.
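In outline, presumably something like this (mountpoints are only examples):

  # rm -rf /export/home/*            (empty the newly mounted, inactive home)
  # cp -a /oldhome/. /export/home/   (copy the old home across once it is mounted)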
Still the other half is unresolved: How do I mount
the old home which is in no fstab (mnttab), on