Re: Stack backtrace on `zpool export`

2015-12-30 Thread Florian Ermisch
On 29.12.15 21:14, Alan Somers wrote:
> On Tue, Dec 29, 2015 at 12:54 PM, NGie Cooper wrote:
>> 
>>> On Dec 29, 2015, at 11:48, Florian Ermisch <0xf...@fsfe.org>
>>> wrote:
>>> 
>>> Hi *,
>>> 
>>> Since I upgraded my laptop from 10.2-RELEASE to 11-CURRENT
>>> (r292536, now r292755) I see this stack backtrace whenever a
>>> zpool is exported:
>>> 
>>> (see https://gist.github.com/a342d0381a6b7ac8b313)
>>> 
>>> (From /var/log/messages)
>>> 
>>> At first I only saw this on the console at shutdown or reboot
>>> (after sync, I think) but later found I can reproduce it by
>>> exporting a zpool. While the only pools where I can trigger
>>> this without a shutdown/reboot are connected via USB 3, I
>>> still see it just before poweroff at shutdown, when the
>>> system's root pool is synced.
>>> 
>>> When I try to export a zpool under heavy load (`make -C
>>> /usr/src -j 4 buildworld` on a 2-core CPU w/ HT), the system
>>> locks up completely. I don't think it's related to memory
>>> pressure, as I haven't seen swap being used during a
>>> buildworld (with 8 GiB of RAM).
>>> 
>>> Has anyone else seen this?
>> 
>> Please file a bug for this issue and assign it to freebsd-fs. It
>> doesn’t look familiar (there might be a lock ordering issue with
>> either zfs or vfs+zfs). Thanks! -NGie

I've filed the bug but I don't think I can change the assignee:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=205725

As mentioned in the first comment, there might be a related
issue with UFS on 10-STABLE:
https://lists.freebsd.org/pipermail/freebsd-stable/2015-December/083895.html

Regards, Florian

> 
> Adding Will@.  I think he was working on this LOR at one point. 
> -Alan
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"

Stack backtrace on `zpool export`

2015-12-29 Thread Florian Ermisch
Hi *,

Since I upgraded my laptop from 10.2-RELEASE to 11-CURRENT (r292536, now
r292755) I see this stack backtrace whenever a zpool is exported:

Dec 27 18:44:02 fuchi-cyber220 kernel: lock order reversal:
Dec 27 18:44:02 fuchi-cyber220 kernel: 1st 0xf800c7472418 zfs (zfs) @ /usr/src/sys/kern/vfs_mount.c:1224
Dec 27 18:44:02 fuchi-cyber220 kernel: 2nd 0xf800c73fad50 zfs_gfs (zfs_gfs) @ /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/gfs.c:494
Dec 27 18:44:02 fuchi-cyber220 kernel: stack backtrace:
Dec 27 18:44:02 fuchi-cyber220 kernel: #0 0x80a7d6f0 at witness_debugger+0x70
Dec 27 18:44:02 fuchi-cyber220 kernel: #1 0x80a7d5f1 at witness_checkorder+0xe71
Dec 27 18:44:02 fuchi-cyber220 kernel: #2 0x809fedcb at __lockmgr_args+0xd3b
Dec 27 18:44:02 fuchi-cyber220 kernel: #3 0x80ac55ec at vop_stdlock+0x3c
Dec 27 18:44:02 fuchi-cyber220 kernel: #4 0x80fbb220 at VOP_LOCK1_APV+0x100
Dec 27 18:44:02 fuchi-cyber220 kernel: #5 0x80ae653a at _vn_lock+0x9a
Dec 27 18:44:02 fuchi-cyber220 kernel: #6 0x8209db13 at gfs_file_create+0x73
Dec 27 18:44:02 fuchi-cyber220 kernel: #7 0x8209dbbd at gfs_dir_create+0x1d
Dec 27 18:44:02 fuchi-cyber220 kernel: #8 0x821649e7 at zfsctl_mknode_snapdir+0x47
Dec 27 18:44:02 fuchi-cyber220 kernel: #9 0x8209e135 at gfs_dir_lookup+0x185
Dec 27 18:44:02 fuchi-cyber220 kernel: #10 0x8209e61d at gfs_vop_lookup+0x1d
Dec 27 18:44:02 fuchi-cyber220 kernel: #11 0x82163a05 at zfsctl_root_lookup+0xf5
Dec 27 18:44:02 fuchi-cyber220 kernel: #12 0x821648a3 at zfsctl_umount_snapshots+0x83
Dec 27 18:44:02 fuchi-cyber220 kernel: #13 0x8217d5ab at zfs_umount+0x7b
Dec 27 18:44:02 fuchi-cyber220 kernel: #14 0x80acf0b0 at dounmount+0x530
Dec 27 18:44:02 fuchi-cyber220 kernel: #15 0x80aceaed at sys_unmount+0x35d
Dec 27 18:44:02 fuchi-cyber220 kernel: #16 0x80e6e13b at amd64_syscall+0x2db
Dec 27 18:44:02 fuchi-cyber220 kernel: #17 0x80e4dd8b at Xfast_syscall+0xfb

(From /var/log/messages)
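For anyone unfamiliar with this kind of WITNESS report: it means the kernel saw the `zfs` vnode lock taken before the `zfs_gfs` lock on one code path, and the opposite order on another (here, the unmount path), a pattern that can deadlock if both paths run concurrently. A rough sketch of that check, in Python rather than the kernel's actual C implementation (class and method names here are illustrative, not FreeBSD's):

```python
class WitnessSketch:
    """Toy lock-order checker: remembers the order in which lock
    classes have been acquired and flags a later acquisition in the
    opposite order as a potential deadlock (a "lock order reversal")."""

    def __init__(self):
        self.order = set()   # (first, second) orderings seen so far
        self.held = []       # locks currently held

    def acquire(self, lock):
        reversals = []
        for prior in self.held:
            if (lock, prior) in self.order:
                # 'lock' has previously been taken before 'prior',
                # but now 'prior' is held while taking 'lock': reversal.
                reversals.append((prior, lock))
            else:
                self.order.add((prior, lock))
        self.held.append(lock)
        return reversals

    def release(self, lock):
        self.held.remove(lock)

w = WitnessSketch()
# Mount-time path: "zfs" lock first, then "zfs_gfs"
w.acquire("zfs"); w.acquire("zfs_gfs")
w.release("zfs_gfs"); w.release("zfs")
# Unmount path takes them the other way round: reversal is reported
w.acquire("zfs_gfs")
print(w.acquire("zfs"))   # -> [('zfs_gfs', 'zfs')]
```

The real WITNESS tracks lock classes globally across all threads and prints the backtrace shown above at the point where the second, conflicting order is observed; the report itself does not mean a deadlock occurred, only that one is possible.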

At first I only saw this on the console at shutdown or reboot (after sync, I
think) but later found I can reproduce it by exporting a zpool.
While the only pools where I can trigger this without a shutdown/reboot are
connected via USB 3, I still see it just before poweroff at shutdown, when the
system's root pool is synced.

When I try to export a zpool under heavy load (`make -C /usr/src -j 4
buildworld` on a 2-core CPU w/ HT), the system locks up completely. I don't
think it's related to memory pressure, as I haven't seen swap being used
during a buildworld (with 8 GiB of RAM).

Has anyone else seen this?

Regards, Florian


Re: Stack backtrace on `zpool export`

2015-12-29 Thread NGie Cooper

> On Dec 29, 2015, at 11:48, Florian Ermisch <0xf...@fsfe.org> wrote:
> 
> Hi *,
> 
> Since I upgraded my laptop from 10.2-RELEASE to 11-CURRENT (r292536, now
> r292755) I see this stack backtrace whenever a zpool is exported:
> 
> Dec 27 18:44:02 fuchi-cyber220 kernel: lock order reversal:
> Dec 27 18:44:02 fuchi-cyber220 kernel: 1st 0xf800c7472418 zfs (zfs) @ /usr/src/sys/kern/vfs_mount.c:1224
> Dec 27 18:44:02 fuchi-cyber220 kernel: 2nd 0xf800c73fad50 zfs_gfs (zfs_gfs) @ /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/gfs.c:494
> Dec 27 18:44:02 fuchi-cyber220 kernel: stack backtrace:
> Dec 27 18:44:02 fuchi-cyber220 kernel: #0 0x80a7d6f0 at witness_debugger+0x70
> Dec 27 18:44:02 fuchi-cyber220 kernel: #1 0x80a7d5f1 at witness_checkorder+0xe71
> Dec 27 18:44:02 fuchi-cyber220 kernel: #2 0x809fedcb at __lockmgr_args+0xd3b
> Dec 27 18:44:02 fuchi-cyber220 kernel: #3 0x80ac55ec at vop_stdlock+0x3c
> Dec 27 18:44:02 fuchi-cyber220 kernel: #4 0x80fbb220 at VOP_LOCK1_APV+0x100
> Dec 27 18:44:02 fuchi-cyber220 kernel: #5 0x80ae653a at _vn_lock+0x9a
> Dec 27 18:44:02 fuchi-cyber220 kernel: #6 0x8209db13 at gfs_file_create+0x73
> Dec 27 18:44:02 fuchi-cyber220 kernel: #7 0x8209dbbd at gfs_dir_create+0x1d
> Dec 27 18:44:02 fuchi-cyber220 kernel: #8 0x821649e7 at zfsctl_mknode_snapdir+0x47
> Dec 27 18:44:02 fuchi-cyber220 kernel: #9 0x8209e135 at gfs_dir_lookup+0x185
> Dec 27 18:44:02 fuchi-cyber220 kernel: #10 0x8209e61d at gfs_vop_lookup+0x1d
> Dec 27 18:44:02 fuchi-cyber220 kernel: #11 0x82163a05 at zfsctl_root_lookup+0xf5
> Dec 27 18:44:02 fuchi-cyber220 kernel: #12 0x821648a3 at zfsctl_umount_snapshots+0x83
> Dec 27 18:44:02 fuchi-cyber220 kernel: #13 0x8217d5ab at zfs_umount+0x7b
> Dec 27 18:44:02 fuchi-cyber220 kernel: #14 0x80acf0b0 at dounmount+0x530
> Dec 27 18:44:02 fuchi-cyber220 kernel: #15 0x80aceaed at sys_unmount+0x35d
> Dec 27 18:44:02 fuchi-cyber220 kernel: #16 0x80e6e13b at amd64_syscall+0x2db
> Dec 27 18:44:02 fuchi-cyber220 kernel: #17 0x80e4dd8b at Xfast_syscall+0xfb
> 
> (From /var/log/messages)
> 
> At first I only saw this on the console at shutdown or reboot (after sync, I
> think) but later found I can reproduce it by exporting a zpool.
> While the only pools where I can trigger this without a shutdown/reboot are
> connected via USB 3, I still see it just before poweroff at shutdown, when
> the system's root pool is synced.
> 
> When I try to export a zpool under heavy load (`make -C /usr/src -j 4
> buildworld` on a 2-core CPU w/ HT), the system locks up completely. I don't
> think it's related to memory pressure, as I haven't seen swap being used
> during a buildworld (with 8 GiB of RAM).
> 
> Has anyone else seen this?

Please file a bug for this issue and assign it to freebsd-fs. It doesn’t look 
familiar (there might be a lock ordering issue with either zfs or vfs+zfs).
Thanks!
-NGie

Re: Stack backtrace on `zpool export`

2015-12-29 Thread Alan Somers
On Tue, Dec 29, 2015 at 12:54 PM, NGie Cooper wrote:
>
>> On Dec 29, 2015, at 11:48, Florian Ermisch <0xf...@fsfe.org> wrote:
>>
>> Hi *,
>>
>> Since I upgraded my laptop from 10.2-RELEASE to 11-CURRENT (r292536, now
>> r292755) I see this stack backtrace whenever a zpool is exported:
>>
>> Dec 27 18:44:02 fuchi-cyber220 kernel: lock order reversal:
>> Dec 27 18:44:02 fuchi-cyber220 kernel: 1st 0xf800c7472418 zfs (zfs) @ /usr/src/sys/kern/vfs_mount.c:1224
>> Dec 27 18:44:02 fuchi-cyber220 kernel: 2nd 0xf800c73fad50 zfs_gfs (zfs_gfs) @ /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/gfs.c:494
>> Dec 27 18:44:02 fuchi-cyber220 kernel: stack backtrace:
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #0 0x80a7d6f0 at witness_debugger+0x70
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #1 0x80a7d5f1 at witness_checkorder+0xe71
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #2 0x809fedcb at __lockmgr_args+0xd3b
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #3 0x80ac55ec at vop_stdlock+0x3c
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #4 0x80fbb220 at VOP_LOCK1_APV+0x100
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #5 0x80ae653a at _vn_lock+0x9a
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #6 0x8209db13 at gfs_file_create+0x73
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #7 0x8209dbbd at gfs_dir_create+0x1d
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #8 0x821649e7 at zfsctl_mknode_snapdir+0x47
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #9 0x8209e135 at gfs_dir_lookup+0x185
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #10 0x8209e61d at gfs_vop_lookup+0x1d
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #11 0x82163a05 at zfsctl_root_lookup+0xf5
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #12 0x821648a3 at zfsctl_umount_snapshots+0x83
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #13 0x8217d5ab at zfs_umount+0x7b
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #14 0x80acf0b0 at dounmount+0x530
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #15 0x80aceaed at sys_unmount+0x35d
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #16 0x80e6e13b at amd64_syscall+0x2db
>> Dec 27 18:44:02 fuchi-cyber220 kernel: #17 0x80e4dd8b at Xfast_syscall+0xfb
>>
>> (From /var/log/messages)
>>
>> At first I only saw this on the console at shutdown or reboot (after sync, I
>> think) but later found I can reproduce it by exporting a zpool.
>> While the only pools where I can trigger this without a shutdown/reboot are
>> connected via USB 3, I still see it just before poweroff at shutdown, when
>> the system's root pool is synced.
>>
>> When I try to export a zpool under heavy load (`make -C /usr/src -j 4
>> buildworld` on a 2-core CPU w/ HT), the system locks up completely. I don't
>> think it's related to memory pressure, as I haven't seen swap being used
>> during a buildworld (with 8 GiB of RAM).
>>
>> Has anyone else seen this?
>
> Please file a bug for this issue and assign it to freebsd-fs. It doesn’t look 
> familiar (there might be a lock ordering issue with either zfs or vfs+zfs).
> Thanks!
> -NGie

Adding Will@.  I think he was working on this LOR at one point.
-Alan