Re: clear old pools remains from active vdevs

2018-04-26 Thread Andriy Gapon
On 26/04/2018 18:14, Alan Somers wrote:
> On Thu, Apr 26, 2018 at 8:37 AM, Eugene Grosbein wrote:
> 
> 26.04.2018 14:50, Andriy Gapon wrote:
> 
> > You can try to use zdb -l to find the stale labels.
> > And then zpool labelclear to clear them.
> 
> Our "zpool labelclear" implementation destroys everything (literally).
> Have you really tried it?

> 
> 
> "zpool labelclear" won't help in this case, because you have literally no
> devices with active labels.  The problem is that the pool is still in your
> /boot/zfs/zpool.cache file.  I think that plain "zpool destroy esx" will work
> in this case.


In the original message Eugene (Zheganin) mentioned that those phantom pools
confused the boot chain.  I do not think that anything in zpool.cache can do that.
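
For what it's worth, the two possible sources can be told apart; this is only a
sketch, and the device name below is a placeholder, not one from Eugene's report:

  # zdb -C -U /boot/zfs/zpool.cache    <- pools listed here come only from the cache file
  # zdb -l /dev/da0p3                  <- labels printed here live on the disk itself,
                                          which is what the boot blocks probe

If the phantom pools show up in the second command, the problem is on-disk
labels rather than zpool.cache.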

-- 
Andriy Gapon


Re: clear old pools remains from active vdevs

2018-04-26 Thread Andriy Gapon
On 26/04/2018 17:37, Eugene Grosbein wrote:
> 26.04.2018 14:50, Andriy Gapon wrote:
> 
>> You can try to use zdb -l to find the stale labels.
>> And then zpool labelclear to clear them.
> 
> Our "zpool labelclear" implementation destroys everything (literally).
> Have you really tried it?
> 

I have never needed it, so I have never used it.

-- 
Andriy Gapon


Re: clear old pools remains from active vdevs

2018-04-26 Thread Alan Somers
On Thu, Apr 26, 2018 at 8:37 AM, Eugene Grosbein  wrote:

> 26.04.2018 14:50, Andriy Gapon wrote:
>
> > You can try to use zdb -l to find the stale labels.
> > And then zpool labelclear to clear them.
>
> Our "zpool labelclear" implementation destroys everything (literally).
> Have you really tried it?
>

"zpool labelclear" won't help in this case, because you have literally no
devices with active labels.  The problem is that the pool is still in your
/boot/zfs/zpool.cache file.  I think that plain "zpool destroy esx" will
work in this case.
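
A rough outline of that idea, purely as a sketch; whether "zpool destroy" is
willing to act on a pool that cannot be imported may depend on the ZFS version
in use, so verify before relying on it:

  # zdb -C -U /boot/zfs/zpool.cache    <- list the pool configs cached in the file
  # zpool destroy esx                  <- ask ZFS to forget the stale "esx" pool
  # zdb -C -U /boot/zfs/zpool.cache    <- confirm the "esx" entry is gone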


Re: clear old pools remains from active vdevs

2018-04-26 Thread Eugene Grosbein
26.04.2018 14:50, Andriy Gapon wrote:

> You can try to use zdb -l to find the stale labels.
> And then zpool labelclear to clear them.

Our "zpool labelclear" implementation destroys everything (literally).
Have you really tried it?



Re: clear old pools remains from active vdevs

2018-04-26 Thread Andriy Gapon
On 26/04/2018 10:28, Eugene M. Zheganin wrote:
> Hello,
> 
> 
> I have some active vdev disk members that used to be in a pool that clearly
> has not been destroyed properly, so I'm seeing something like this in the
> "zpool import" output:
> 
> 
> # zpool import
>    pool: zroot
>      id: 14767697319309030904
>   state: UNAVAIL
>  status: The pool was last accessed by another system.
>  action: The pool cannot be imported due to damaged devices or data.
>     see: http://illumos.org/msg/ZFS-8000-EY
>  config:
> 
>         zroot                    UNAVAIL  insufficient replicas
>           mirror-0               UNAVAIL  insufficient replicas
>             5291726022575795110  UNAVAIL  cannot open
>             2933754417879630350  UNAVAIL  cannot open
> 
>    pool: esx
>      id: 8314148521324214892
>   state: UNAVAIL
>  status: The pool was last accessed by another system.
>  action: The pool cannot be imported due to damaged devices or data.
>     see: http://illumos.org/msg/ZFS-8000-EY
>  config:
> 
>         esx                       UNAVAIL  insufficient replicas
>           mirror-0                UNAVAIL  insufficient replicas
>             10170732803757341731  UNAVAIL  cannot open
>             9207269511643803468   UNAVAIL  cannot open
> 
> 
> is there any _safe_ way to get rid of this? I'm asking because the gptzfsboot
> loader in recent -STABLE stumbles upon this and refuses to boot the system
> (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227772). The workaround is
> to use the 11.1 loader, but I'm afraid this behavior will now be the intended
> one.

You can try to use zdb -l to find the stale labels.
And then zpool labelclear to clear them.
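
For illustration, roughly what that would look like; da0p3 and da1p3 are
placeholder device names, and given the concerns raised elsewhere in this
thread about labelclear being destructive, double-check what a device holds
before clearing anything:

  # zdb -l /dev/da0p3                  <- print the (up to four) ZFS labels on the device;
                                          compare the "name" and "guid" fields against the
                                          phantom pool
  # zpool labelclear -f /dev/da0p3     <- wipe those labels once you are sure the device
  # zpool labelclear -f /dev/da1p3        is not part of any pool you still need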


-- 
Andriy Gapon


clear old pools remains from active vdevs

2018-04-26 Thread Eugene M. Zheganin

Hello,


I have some active vdev disk members that used to be in a pool that
clearly has not been destroyed properly, so I'm seeing something like
this in the "zpool import" output:



# zpool import
   pool: zroot
     id: 14767697319309030904
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        zroot                    UNAVAIL  insufficient replicas
          mirror-0               UNAVAIL  insufficient replicas
            5291726022575795110  UNAVAIL  cannot open
            2933754417879630350  UNAVAIL  cannot open

   pool: esx
     id: 8314148521324214892
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        esx                       UNAVAIL  insufficient replicas
          mirror-0                UNAVAIL  insufficient replicas
            10170732803757341731  UNAVAIL  cannot open
            9207269511643803468   UNAVAIL  cannot open


is there any _safe_ way to get rid of this? I'm asking because the
gptzfsboot loader in recent -STABLE stumbles upon this and refuses to
boot the system
(https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227772). The
workaround is to use the 11.1 loader, but I'm afraid this behavior will
now be the intended one.
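
For reference, the stale labels can be located roughly like this; the device
name is only an example, not one of my actual disks:

  # gpart show -p                      <- list partitions; the freebsd-zfs ones are the candidates
  # zdb -l /dev/da0p3                  <- print any ZFS labels still present on a given partition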



Eugene.
