Re: cannot detach vdev from zfs pool

2016-12-22 Thread Eugene M. Zheganin

Hi.

On 22.12.2016 21:26, Alan Somers wrote:

> I'm not surprised to see this kind of error in a ZFS on GELI on Zvol
> pool.  ZFS on Zvols has known deadlocks, even without involving GELI.
> GELI only makes it worse, because it foils the recursion detection in
> zvol_open.  I wouldn't bother opening a PR if I were you, because it
> probably wouldn't add any new information.
>
> Sorry it didn't meet your expectations,
> -Alan


Oh, so that's why it happened.

Okay, that's perfectly fine with me.

Thanks.

Eugene.



Re: cannot detach vdev from zfs pool

2016-12-22 Thread Alan Somers
On Thu, Dec 22, 2016 at 2:11 AM, Eugene M. Zheganin  wrote:
> Hi,
>
> Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool,
> since it's now officially unsupported. So I needed to reslice my disk,
> and hence detach one of the disks from a mirrored pool. I issued 'zpool
> detach zroot gpt/zroot1' and my system livelocked almost immediately, so
> I pressed reset. Now I got this:
>
> # zpool status zroot
>   pool: zroot
>  state: DEGRADED
> status: One or more devices has been taken offline by the administrator.
> Sufficient replicas exist for the pool to continue functioning in a
> degraded state.
> action: Online the device using 'zpool online' or replace the device with
> 'zpool replace'.
>   scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
> config:
>
> NAME                     STATE     READ WRITE CKSUM
> zroot                    DEGRADED     0     0     0
>   mirror-0               DEGRADED     0     0     0
>     gpt/zroot0           ONLINE       0     0     0
>     1151243332124505229  OFFLINE      0     0     0  was /dev/gpt/zroot1
>
> errors: No known data errors
>
> This isn't a big deal by itself, since I was able to create a second zfs
> pool and I'm now relocating my data to it, although I should say that
> this is a very disturbing sequence of events, because I'm now unable to
> even delete the UNAVAIL vdev from the pool. I tried to boot from a
> FreeBSD USB stick and detach it there, but all I discovered was that the
> zfs subsystem locks up upon the command 'zpool detach zroot
> 1151243332124505229'. I waited for several minutes but nothing happened;
> furthermore, subsequent zpool/zfs commands hang too.
>
> Is this worth submitting a PR, or maybe it needs additional
> investigation? In general I intend to destroy this pool after relocating
> it, but I'm afraid someone (or even myself again) could step on this
> later. Both disks are healthy, and I don't see any complaints in dmesg.
> I'm running FreeBSD 11.0-RELEASE-p5 here. The pool was initially created
> somewhere under 9.0, I guess.
>
> Thanks.
> Eugene.

I'm not surprised to see this kind of error in a ZFS on GELI on Zvol
pool.  ZFS on Zvols has known deadlocks, even without involving GELI.
GELI only makes it worse, because it foils the recursion detection in
zvol_open.  I wouldn't bother opening a PR if I were you, because it
probably wouldn't add any new information.
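
For anyone who hasn't run into this layering before, the setup that
triggers it looks roughly like the following (the pool, volume and device
names here are purely illustrative, not Eugene's actual configuration):

# zfs create -V 100G tank/vol0
# geli init /dev/zvol/tank/vol0
# geli attach /dev/zvol/tank/vol0
# zpool create inner /dev/zvol/tank/vol0.eli

The inner pool sits on a GELI provider, which in turn sits on a zvol of
the outer pool. Because the .eli device hides the underlying zvol,
zvol_open can no longer see that opening it recurses back into ZFS, and
that is where the deadlocks come from.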

Sorry it didn't meet your expectations,
-Alan


cannot detach vdev from zfs pool

2016-12-22 Thread Eugene M. Zheganin
Hi,

Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool,
since it's now officially unsupported. So I needed to reslice my disk,
and hence detach one of the disks from a mirrored pool. I issued 'zpool
detach zroot gpt/zroot1' and my system livelocked almost immediately, so
I pressed reset. Now I got this:

# zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
  scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
config:

NAME                     STATE     READ WRITE CKSUM
zroot                    DEGRADED     0     0     0
  mirror-0               DEGRADED     0     0     0
    gpt/zroot0           ONLINE       0     0     0
    1151243332124505229  OFFLINE      0     0     0  was /dev/gpt/zroot1

errors: No known data errors

This isn't a big deal by itself, since I was able to create a second zfs
pool and I'm now relocating my data to it, although I should say that
this is a very disturbing sequence of events, because I'm now unable to
even delete the UNAVAIL vdev from the pool. I tried to boot from a
FreeBSD USB stick and detach it there, but all I discovered was that the
zfs subsystem locks up upon the command 'zpool detach zroot
1151243332124505229'. I waited for several minutes but nothing happened;
furthermore, subsequent zpool/zfs commands hang too.
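
(A side note for anyone who steps on this later: the numeric id that
zpool status prints for the missing device is the vdev GUID, and it can
be cross-checked against the on-disk labels before retrying the detach.
The commands below are only a sketch; check zdb(8) and zpool(8) for the
exact flags your release supports:

# zdb -l /dev/gpt/zroot0
# zpool status -g zroot

The first dumps the vdev labels of the remaining mirror member, which
include the GUIDs of the children of mirror-0; the second, where the -g
flag is available, prints vdev GUIDs instead of device names.)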

Is this worth submitting a PR, or maybe it needs additional
investigation? In general I intend to destroy this pool after relocating
it, but I'm afraid someone (or even myself again) could step on this
later. Both disks are healthy, and I don't see any complaints in dmesg.
I'm running FreeBSD 11.0-RELEASE-p5 here. The pool was initially created
somewhere under 9.0, I guess.

Thanks.
Eugene.
