on 07/12/2012 02:55 Garrett Cooper said the following:
> If I try and let it import the pool at boot it claims the pool is in a
> FAULTED state when I point mountroot to /dev/cd0 (one of gjb's
> snapshot CDs -- thanks!), run service hostid onestart, etc. If I
> export and try to reimport the pool it claims it's not available (!).
> However, if I boot, run service hostid onestart, _then_ import the
> pool, then the pool is imported properly.
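For reference, the working sequence described above is roughly the following (a sketch only; "tank" is a placeholder for the actual pool name):

```shell
# After booting from the livecd into a shell:
service hostid onestart   # set the host ID before touching ZFS
zpool import              # list pools visible to the kernel
zpool import -f tank      # then import the pool by name
```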

This sounds messy; I am not sure if it has any informative value.
I think I've seen something like this after some recent ZFS import from upstream,
when my kernel and userland were out of sync.
Do you do a full boot from the livecd?  Or do you boot your kernel but then
userland from the CD?
In any case, I am not sure if this is relevant to your main trouble.

> While I was mucking around with the pool trying to get the system to
> boot I set the cachefile attribute to /boot/zfs/zpool.cache before
> upgrading. In order to diagnose whether or not that was at fault, I
> set that back to none and I'm still running into the same issue.
> I'm going to try backing out your commit and rebuild my kernel in
> order to determine whether or not that's at fault.
> One other thing: both my machines have more than one ZFS-only zpool,
> and it might be probing the pools in the wrong order; one of the pools
> has bootfs set, the other doesn't, and the behavior is sort of
> resembling it not being set properly.
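The cachefile round-trip described above would look roughly like this ("tank" again stands in for the real pool name):

```shell
# Point the pool's cache file at the boot-time location:
zpool set cachefile=/boot/zfs/zpool.cache tank
zpool get cachefile tank        # verify the current setting
# Revert it while diagnosing, as described above:
zpool set cachefile=none tank
```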

The bootfs property should not matter.  Multi-pool configurations have been tested
before the commit.

Andriy Gapon
freebsd-current@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"