OK, so I've got my next little adventure here to share :-)

... after reading your posts I was very eager to give the
whole boot-zfs-without-partitions thing a new try.

My starting situation was a ZFS mirror made up, as I wrote,
of two GPT partitions, so my pool looked like:

phaedrus# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad6p4   ONLINE       0     0     0
            ad4p4   ONLINE       0     0     0

It was root-mounted and everything was seemingly working
fine, with the machine surviving several concurrent runs of
bonnie++, sysbench, and super-smack for many hours (cool!).

So, to give it another try, my plan was to detach one
partition, clear the gmirror on the UFS boot partition,
make a new pool out of the freed disk, and start
the experiment over.

It looked almost like this:

zpool offline tank ad4p4
zpool detach tank ad4p4

gmirror stop gmboot (made out of ad6p2 and ad4p2)
gmirror remove gmboot ad4p2

Then I had to reboot, because the system wouldn't give up
the swap partition on the zpool.
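(In hindsight, swapoff(8) might have spared me the reboot,
e.g. something like

swapoff /dev/zvol/tank/swap

if the swap was a zvol; I don't remember the exact device,
so take that path as an assumption.)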

That's where the first problem began: it wouldn't boot
anymore... just because I removed a device?
I was stuck at the mountroot: stage; it wouldn't find
the root filesystem on ZFS.
(This also happened when I physically detached ad4.)

So I booted off a recent 8-CURRENT ISO DVD, and although
the mountroot prompt comes, IIRC, at a later stage than
the loader, I had a hunch the loader could have something
to do with it, so I downloaded Adam's CURRENT/ZFS loader
and put it in the appropriate place on my UFS boot partition...
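For the record, "the appropriate place" was roughly this
(ad6p2 being my UFS boot partition; the mountpoint is just
an example):

fetch http://www.egr.msu.edu/~mcdouga9/loader
mount /dev/ad6p2 /mnt
cp loader /mnt/boot/loader
umount /mnt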

Note: from the CD, I had to import the pool with
zpool import -o altroot=/somewhere tank
to avoid problems with the datasets being mounted on top
of the 8-fixit environment's /usr ...
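That is, something like this from the fixit shell (the
directory name is just an example):

mkdir /tmp/altroot
zpool import -o altroot=/tmp/altroot tank

so all the pool's datasets mount under /tmp/altroot instead
of clobbering the live filesystem.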

OK, rebooted, and whoops: it would boot again into the
previous environment.

So... from there I started over with the creation of
a ZFS-boot-only setup on ad4 (with the intention
of zpool-attaching ad6 later on).
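(The attach step would later have been something along the
lines of

zpool attach esso ad4 ad6

quoting from memory, of course.)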

dd if=/dev/zero bs=1m of=/dev/ad4 count=200
(just to be safe, some 'whitespace'..)

zpool create esso ad4

zfs snapshot -r tank@night
zfs send -R tank@night | zfs recv -d -F esso
(it did what it had to do - cool new v13 feature BTW!)
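(For the uninitiated: send -R replicates the whole dataset
tree below the snapshot, properties and snapshots included,
while on the receiving side -d strips the source pool name
from the dataset paths and -F forces the target to be rolled
back/overwritten as needed.)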

zpool export esso

dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
dd if=/boot/zfsboot of=/dev/ad4 bs=512 skip=1 seek=1024
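(In case the numbers look like voodoo: the first dd puts the
first 512 bytes of zfsboot into the disk's boot sector; the
second writes the remainder at sector 1024, that is at the
512 KB offset where ZFS reserves a boot block area behind
its two front vdev labels. At least that's my understanding
of the layout.)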

zpool import esso

zpool set bootfs=esso esso

The mountpoints (legacy on the pool filesystem esso, plus
the corresponding ones on the datasets below it) had been
correctly copied over by the send -R.

I just briefly mounted esso somewhere else,
edited loader.conf and fstab, and put the mountpoint back
to legacy.
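The loader.conf side of that edit boiled down to the
standard ZFS-root lines (pool name esso):

zfs_load="YES"
vfs.root.mountfrom="zfs:esso"

and, if I remember the howto correctly, fstab got a root
line along the lines of esso / zfs rw 0 0, as usual for a
legacy mountpoint.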

shutdown -r now.

Upon boot, it would wait a while, not present any F1/F5
prompt, and then boot into the old environment
(the ad6p2 boot partition, and from there tank mounted as root).

From there, zfs list and zpool status showed only the
root pool (tank); the new one (esso) was not present.

A zpool import showed:

heidegger# zpool import
  pool: esso
    id: 865609520845688328
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        esso        UNAVAIL  insufficient replicas
          ad4       UNAVAIL  cannot open

zpool import -f esso did not succeed either; looking on
the console, I found:

ZFS: WARNING: could not open ad4 for writing

I repeated the steps above two more times, making sure
I had wiped everything off ad4 before trying... but it
would always come up with that message. The disk is OK,
and so are the cables; I triple-checked. Besides, writing
to the disk by other means (such as dd, or creating a new
pool) succeeded... (albeit after the usual
sysctl kern.geom.debugflags=16 to get around GEOM's write
protection on active disks ...)

Well, for now I think I'll stick to the GPT + UFS boot +
ZFS root solution (I'm so happy this works seamlessly,
so this is a big THANX and not a complaint!), but I
thought I'd share the latest hiccups...

I won't be getting to that machine for a few days before
restoring the GPT/UFS-based mirror, so if someone would like
me to provide other info, I'll be happy to contribute it.

Big Regards!

Lorenzo


On 01.06.2009, at 19:09, Lorenzo Perone wrote:

On 31.05.2009, at 09:18, Adam McDougall wrote:

I encountered the same symptoms today on both a 32-bit and a
64-bit brand-new install using gptzfsboot.  It works for me when
I use a copy of loader from an 8-CURRENT box with ZFS support
compiled in.  I haven't looked into it much yet, but it might
help you.  If you want, you can try the loader I am using from:
http://www.egr.msu.edu/~mcdouga9/loader

Thanx for posting me your loader, I'll try it tomorrow night!
(Any hint, BTW, on why the one in -STABLE seems to be
broken, or whether it has actually been fixed by now?)

