Re: witness report: vmmaplk, inode

2018-06-13 Thread richard.n.procter


On Wed, 13 Jun 2018, richard.n.proc...@gmail.com wrote:
> I found this witness log on my computestick but not here.
> 
> OpenBSD 6.3-current (GENERIC.MP) #10: Mon Jun 11 14:02:36 NZST 2018
> procter@dill.internal:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
> (CVS checkout on this date, clean but for a pf patch. NZST is 12 
> hours ahead of UTC.) 

For the record, I'm told there's a patch for this reversal: 
https://marc.info/?l=openbsd-bugs&m=152802593705776&w=2

which was posted in response to:
https://marc.info/?l=openbsd-tech&m=152796704214156&w=2 (Steele)

and which also helped with:
https://marc.info/?l=openbsd-tech&m=152821522023626&w=2 (Popovski)

After patching my tree and noodling for a few more hours with 
emacs, firefox and chrome, the witness report hasn't reappeared.

cheers, 
Richard. 

> 
> lock order reversal:
>  1st 0xff0009fe22f8 vmmaplk (&map->lock) @ 
>   /usr/src/sys/uvm/uvm_map.c:4433
>  2nd 0xff00691ec0a0 inode (&ip->i_lock) @ 
>   /usr/src/sys/ufs/ufs/ufs_vnops.c:1555
> lock order "&ip->i_lock"(rrwlock) -> "&map->lock"(rwlock) first seen at:
> #0  witness_checkorder+0x4b4
> #1  _rw_enter_read+0x49
> #2  uvmfault_lookup+0x8d
> #3  uvm_fault+0x72
> #4  trap+0x516
> #5  recall_trap+0x8
> #6  copyout+0x48
> #7  ffs_read+0x1f0
> #8  VOP_READ+0x49
> #9  vn_read+0xca
> #10 dofilereadv+0x21c
> #11 sys_read+0x82
> #12 syscall+0x32a
> #13 Xsyscall_untramp+0xc0
> lock order "&map->lock"(rwlock) -> "&ip->i_lock"(rrwlock) first seen at:
> #0  witness_checkorder+0x4b4
> #1  _rw_enter+0x68
> #2  _rrw_enter+0x3e
> #3  VOP_LOCK+0x3d
> #4  vn_lock+0x34
> #5  uvn_io+0x1b8
> #6  uvm_pager_put+0x109
> #7  uvn_flush+0x424
> #8  uvm_map_clean+0x3e7
> #9  syscall+0x32a
> #10 Xsyscall_untramp+0xc0
> 
> 
> OpenBSD 6.3-current (GENERIC.MP) #10: Mon Jun 11 14:02:36 NZST 2018
> procter@dill.internal:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 2056851456 (1961MB)
> avail mem = 1963495424 (1872MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 3.0 @ 0x7b37e000 (51 entries)
> bios0: vendor Intel Corp. version "SCCHTAX5.86A.0024.2016.0408.1041" date 
> 04/08/2016
> bios0: Intel Corporation STK1AW32SC
> acpi0 at bios0: rev 2
> acpi0: sleep states S0 S4 S5
> acpi0: tables DSDT FACP APIC FPDT FIDT MCFG UEFI SSDT HPET SSDT SSDT SSDT 
> LPIT BCFG PRAM BGRT CSRT WDAT
> acpi0: wakeup devices
> acpitimer0 at acpi0: 3579545 Hz, 24 bits
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1440.34 MHz
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
> cpu0: 1MB 64b/line 16-way L2 cache
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
> cpu0: apic clock running at 79MHz
> cpu0: mwait min=64, max=64, C-substates=0.2.0.0.0.0.3.3, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.95 MHz
> cpu1: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
> cpu1: 1MB 64b/line 16-way L2 cache
> cpu1: smt 0, core 1, package 0
> cpu2 at mainbus0: apid 4 (application processor)
> cpu2: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.96 MHz
> cpu2: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
> cpu2: 1MB 64b/line 16-way L2 cache
> cpu2: smt 0, core 2, package 0
> cpu3 at mainbus0: apid 6 (application processor)
> cpu3: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.96 MHz
> cpu3: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
> cpu3: 1MB 64b/line 16-way L2 cache

Re: witness report: vmmaplk, inode

2018-06-12 Thread Rafael Sadowski
On Wed Jun 13, 2018 at 07:47:42AM +1200, richard.n.proc...@gmail.com wrote:
> Hi, 
> 
> I found this witness log on my computestick but not here.
> 
> I was doing little at the time besides using emacs and some vanilla 
> chrome and possibly firefox. Hope it's of use.
> 
> cheers, 
> Richard. 
> 
> OpenBSD 6.3-current (GENERIC.MP) #10: Mon Jun 11 14:02:36 NZST 2018
> procter@dill.internal:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
> (CVS checkout on this date, clean but for a pf patch. NZST is 12 
> hours ahead of UTC.) 
> 
> lock order reversal:
>  1st 0xff0009fe22f8 vmmaplk (&map->lock) @ 
>   /usr/src/sys/uvm/uvm_map.c:4433
>  2nd 0xff00691ec0a0 inode (&ip->i_lock) @ 
>   /usr/src/sys/ufs/ufs/ufs_vnops.c:1555
> lock order "&ip->i_lock"(rrwlock) -> "&map->lock"(rwlock) first seen at:
> #0  witness_checkorder+0x4b4
> #1  _rw_enter_read+0x49
> #2  uvmfault_lookup+0x8d
> #3  uvm_fault+0x72
> #4  trap+0x516
> #5  recall_trap+0x8
> #6  copyout+0x48
> #7  ffs_read+0x1f0
> #8  VOP_READ+0x49
> #9  vn_read+0xca
> #10 dofilereadv+0x21c
> #11 sys_read+0x82
> #12 syscall+0x32a
> #13 Xsyscall_untramp+0xc0
> lock order "&map->lock"(rwlock) -> "&ip->i_lock"(rrwlock) first seen at:
> #0  witness_checkorder+0x4b4
> #1  _rw_enter+0x68
> #2  _rrw_enter+0x3e
> #3  VOP_LOCK+0x3d
> #4  vn_lock+0x34
> #5  uvn_io+0x1b8
> #6  uvm_pager_put+0x109
> #7  uvn_flush+0x424
> #8  uvm_map_clean+0x3e7
> #9  syscall+0x32a
> #10 Xsyscall_untramp+0xc0
> 
> 

Same here.

OpenBSD 6.3-current (GENERIC.MP) #92: Sun Jun 10 09:11:20 MDT 2018
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

lock order reversal:
 1st 0xff03ab07b188 vmmaplk (&map->lock) @ /usr/src/sys/uvm/uvm_map.c:4433
 2nd 0xff044fe231a8 inode (&ip->i_lock) @ 
/usr/src/sys/ufs/ufs/ufs_vnops.c:1555
lock order "&ip->i_lock"(rrwlock) -> "&map->lock"(rwlock) first seen at:
#0  witness_checkorder+0x4b4
#1  _rw_enter_read+0x49
#2  uvmfault_lookup+0x8d
#3  uvm_fault+0x72
#4  trap+0x516
#5  recall_trap+0x8
#6  copyout+0x48
#7  ffs_read+0x1f0
#8  VOP_READ+0x49
#9  vn_read+0xca
#10 dofilereadv+0x21c
#11 sys_read+0x82
#12 syscall+0x32a
#13 Xsyscall_untramp+0xc0
lock order "&map->lock"(rwlock) -> "&ip->i_lock"(rrwlock) first seen at:
#0  witness_checkorder+0x4b4
#1  _rw_enter+0x68
#2  _rrw_enter+0x3e
#3  VOP_LOCK+0x3d
#4  vn_lock+0x34
#5  uvn_io+0x1b8
#6  uvm_pager_put+0x109
#7  uvn_flush+0x424
#8  uvm_map_clean+0x3e7
#9  syscall+0x32a
#10 Xsyscall_untramp+0xc0




witness report: vmmaplk, inode

2018-06-12 Thread richard.n.procter
Hi, 

I found this witness log on my computestick but not here.

I was doing little at the time besides using emacs and some vanilla 
chrome and possibly firefox. Hope it's of use.

cheers, 
Richard. 

OpenBSD 6.3-current (GENERIC.MP) #10: Mon Jun 11 14:02:36 NZST 2018
procter@dill.internal:/usr/src/sys/arch/amd64/compile/GENERIC.MP

(CVS checkout on this date, clean but for a pf patch. NZST is 12 
hours ahead of UTC.) 

lock order reversal:
 1st 0xff0009fe22f8 vmmaplk (&map->lock) @ 
  /usr/src/sys/uvm/uvm_map.c:4433
 2nd 0xff00691ec0a0 inode (&ip->i_lock) @ 
  /usr/src/sys/ufs/ufs/ufs_vnops.c:1555
lock order "&ip->i_lock"(rrwlock) -> "&map->lock"(rwlock) first seen at:
#0  witness_checkorder+0x4b4
#1  _rw_enter_read+0x49
#2  uvmfault_lookup+0x8d
#3  uvm_fault+0x72
#4  trap+0x516
#5  recall_trap+0x8
#6  copyout+0x48
#7  ffs_read+0x1f0
#8  VOP_READ+0x49
#9  vn_read+0xca
#10 dofilereadv+0x21c
#11 sys_read+0x82
#12 syscall+0x32a
#13 Xsyscall_untramp+0xc0
lock order "&map->lock"(rwlock) -> "&ip->i_lock"(rrwlock) first seen at:
#0  witness_checkorder+0x4b4
#1  _rw_enter+0x68
#2  _rrw_enter+0x3e
#3  VOP_LOCK+0x3d
#4  vn_lock+0x34
#5  uvn_io+0x1b8
#6  uvm_pager_put+0x109
#7  uvn_flush+0x424
#8  uvm_map_clean+0x3e7
#9  syscall+0x32a
#10 Xsyscall_untramp+0xc0


OpenBSD 6.3-current (GENERIC.MP) #10: Mon Jun 11 14:02:36 NZST 2018
procter@dill.internal:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2056851456 (1961MB)
avail mem = 1963495424 (1872MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 3.0 @ 0x7b37e000 (51 entries)
bios0: vendor Intel Corp. version "SCCHTAX5.86A.0024.2016.0408.1041" date 
04/08/2016
bios0: Intel Corporation STK1AW32SC
acpi0 at bios0: rev 2
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP APIC FPDT FIDT MCFG UEFI SSDT HPET SSDT SSDT SSDT LPIT 
BCFG PRAM BGRT CSRT WDAT
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1440.34 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
cpu0: 1MB 64b/line 16-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 79MHz
cpu0: mwait min=64, max=64, C-substates=0.2.0.0.0.0.3.3, IBE
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.95 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
cpu1: 1MB 64b/line 16-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.96 MHz
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
cpu2: 1MB 64b/line 16-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 6 (application processor)
cpu3: Intel(R) Atom(TM) x5-Z8300 CPU @ 1.44GHz, 1439.96 MHz
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,SMEP,ERMS,SENSOR,ARAT,MELTDOWN
cpu3: 1MB 64b/line 16-way L2 cache
cpu3: smt 0, core 3, package 0
ioapic0 at mainbus0: apid 1 pa 0xfec0, version 20, 115 pins
acpimcfg0 at acpi0 addr 0xe000, bus 0-255
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 1 (RP01)
acpiprt2 at acpi0: bus -1 (RP02)
acpiprt3 at acpi0: bus -1 (RP03)
acpiprt4 at acpi0: bus -1 (RP04)
acpicpu0 at acpi0: C3(10@1000 mwait.1@0x64), C2(10@500 mwait.1@0x58), C1(1000@1 
mwait.1), PSS
acpicpu1 at acpi0: C3(10@1000 mwait.1@0x64), C2(10@500 mwait.1@0x58), C1(1000@1 
mwait.1), PSS
acpicpu2 at acpi0: C3(10@1000 mwait.1@0x64), C2(10@500 mwait.1@0x58), C1(1000@1 
mwait.1), PSS
acpicpu3 at acpi0: C3(10@1000 mwait.1@0x64), C2(10@500 mwait.1@0x58), C1(1000@1 
mwait.1), PSS
acpipwrres0 at acpi0: ID3C, resource for ISP3
acpipwrres1 at acpi0: WWPR, resource for HS03, MDM1
acpipwrres2 at acpi0: WWPR, resource for HS13, MDM1
acpipwrres3 at acpi0: WWPR, resource for SSC1, MDM3
acpipwrres4 at acpi0: WWPR, resou

Re: witness report

2018-06-03 Thread Alexander Bluhm
On Sun, Jun 03, 2018 at 10:37:30AM -0700, Philip Guenther wrote:
> @@ -1602,7 +1607,7 @@ vfs_stall(struct proc *p, int stall)
>*/
>   TAILQ_FOREACH_REVERSE(mp, &mountlist, mntlist, mnt_list) {
>   if (stall) {
> - error = vfs_busy(mp, VB_WRITE|VB_WAIT);
> + error = vfs_busy(mp, VB_WRITE|VB_WAIT|VB_DUPOK);
>   if (error) {
>   printf("%s: busy\n", mp->mnt_stat.f_mntonname);
>   allerror = error;

dounmount() does the vfs_busy() in the forward direction of the mnt_list:

	while ((mp = TAILQ_NEXT(mp, mnt_list)) != NULL) {
		error = vfs_busy(mp, VB_WRITE|VB_WAIT);

Then it unmounts all nested mount points in the reverse direction.

So I think we should remove the _REVERSE in vfs_stall().

bluhm



Re: witness report

2018-06-03 Thread Alexander Bluhm
On Sun, Jun 03, 2018 at 10:37:30AM -0700, Philip Guenther wrote:
> Diff below adds VB_DUPOK to indicate that a given vfs_busy() call is 
> expected/permitted to occur while the thread already holds another 
> filesystem busy; the caller is responsible for ensuring the filesystems 
> are locked in the correct order (or that NOWAIT is used safely as in the 
> sys_mount() case).
> 
> As part of this, this plumbs RW_DUPOK for rw_enter().
> 
> I no longer get the witness warning on hibernate with this.
> 
> ok?

OK bluhm@

> Index: sys/mount.h
> ===
> RCS file: /data/src/openbsd/src/sys/sys/mount.h,v
> retrieving revision 1.136
> diff -u -p -r1.136 mount.h
> --- sys/mount.h   8 May 2018 08:58:49 -   1.136
> +++ sys/mount.h   3 Jun 2018 17:05:28 -
> @@ -564,6 +564,7 @@ int   vfs_busy(struct mount *, int);
>  #define VB_WRITE 0x02
>  #define VB_NOWAIT0x04/* immediately fail on busy lock */
>  #define VB_WAIT  0x08/* sleep fail on busy lock */
> +#define VB_DUPOK 0x10/* permit duplicate mount busying */
>  
>  int vfs_isbusy(struct mount *);
>  int vfs_mount_foreach_vnode(struct mount *, int (*func)(struct vnode *,
> Index: sys/rwlock.h
> ===
> RCS file: /data/src/openbsd/src/sys/sys/rwlock.h,v
> retrieving revision 1.22
> diff -u -p -r1.22 rwlock.h
> --- sys/rwlock.h  12 Aug 2017 23:27:44 -  1.22
> +++ sys/rwlock.h  3 Jun 2018 17:05:32 -
> @@ -116,6 +116,7 @@ struct rwlock {
>  #define RW_SLEEPFAIL 0x0020UL /* fail if we slept for the lock */
>  #define RW_NOSLEEP   0x0040UL /* don't wait for the lock */
>  #define RW_RECURSEFAIL   0x0080UL /* Fail on recursion for RRW 
> locks. */
> +#define RW_DUPOK 0x0100UL /* Permit duplicate lock */
>  
>  /*
>   * for rw_status() and rrw_status() only: exclusive lock held by
> Index: kern/kern_rwlock.c
> ===
> RCS file: /data/src/openbsd/src/sys/kern/kern_rwlock.c,v
> retrieving revision 1.35
> diff -u -p -r1.35 kern_rwlock.c
> --- kern/kern_rwlock.c21 Mar 2018 12:28:39 -  1.35
> +++ kern/kern_rwlock.c3 Jun 2018 17:00:02 -
> @@ -223,6 +223,8 @@ _rw_enter(struct rwlock *rwl, int flags 
>   lop_flags = LOP_NEWORDER;
>   if (flags & RW_WRITE)
>   lop_flags |= LOP_EXCLUSIVE;
> + if (flags & RW_DUPOK)
> + lop_flags |= LOP_DUPOK;
>   if ((flags & RW_NOSLEEP) == 0 && (flags & RW_DOWNGRADE) == 0)
>   WITNESS_CHECKORDER(&rwl->rwl_lock_obj, lop_flags, file, line,
>   NULL);
> Index: kern/vfs_subr.c
> ===
> RCS file: /data/src/openbsd/src/sys/kern/vfs_subr.c,v
> retrieving revision 1.273
> diff -u -p -r1.273 vfs_subr.c
> --- kern/vfs_subr.c   27 May 2018 06:02:14 -  1.273
> +++ kern/vfs_subr.c   3 Jun 2018 17:04:09 -
> @@ -188,6 +188,11 @@ vfs_busy(struct mount *mp, int flags)
>   else
>   rwflags |= RW_NOSLEEP;
>  
> +#ifdef WITNESS
> + if (flags & VB_DUPOK)
> + rwflags |= RW_DUPOK;
> +#endif
> +
>   if (rw_enter(&mp->mnt_lock, rwflags))
>   return (EBUSY);
>  
> @@ -1602,7 +1607,7 @@ vfs_stall(struct proc *p, int stall)
>*/
>   TAILQ_FOREACH_REVERSE(mp, &mountlist, mntlist, mnt_list) {
>   if (stall) {
> - error = vfs_busy(mp, VB_WRITE|VB_WAIT);
> + error = vfs_busy(mp, VB_WRITE|VB_WAIT|VB_DUPOK);
>   if (error) {
>   printf("%s: busy\n", mp->mnt_stat.f_mntonname);
>   allerror = error;
> Index: kern/vfs_syscalls.c
> ===
> RCS file: /data/src/openbsd/src/sys/kern/vfs_syscalls.c,v
> retrieving revision 1.284
> diff -u -p -r1.284 vfs_syscalls.c
> --- kern/vfs_syscalls.c   2 Jun 2018 10:27:43 -   1.284
> +++ kern/vfs_syscalls.c   3 Jun 2018 17:04:55 -
> @@ -230,7 +230,7 @@ sys_mount(struct proc *p, void *v, regis
>  
>  update:
>   /* Ensure that the parent mountpoint does not get unmounted. */
> - error = vfs_busy(vp->v_mount, VB_READ|VB_NOWAIT);
> + error = vfs_busy(vp->v_mount, VB_READ|VB_NOWAIT|VB_DUPOK);
>   if (error) {
>   if (mp->mnt_flag & MNT_UPDATE) {
>   mp->mnt_flag = mntflag;
> @@ -439,7 +439,7 @@ dounmount(struct mount *mp, int flags, s
>   error = EBUSY;
>   goto err;
>   }
> - error = vfs_busy(mp, VB_WRITE|VB_WAIT);
> + error = vfs_busy(mp, VB_WRITE|VB_WAIT|VB_DUPOK);
>   if (error) {
>   if ((flags & MNT_DOOMED

Re: witness report

2018-06-03 Thread Amit Kulkarni
On Sun, 3 Jun 2018 13:09:43 -0700
Philip Guenther  wrote:

> On Sun, Jun 3, 2018 at 12:51 PM, Amit Kulkarni  wrote:
> 
> > On Sun, 3 Jun 2018 10:37:30 -0700
> > Philip Guenther  wrote:
> >
> ...
> 
> > > Index: kern/kern_rwlock.c
> > > ===
> > > RCS file: /data/src/openbsd/src/sys/kern/kern_rwlock.c,v
> > > retrieving revision 1.35
> > > diff -u -p -r1.35 kern_rwlock.c
> > > --- kern/kern_rwlock.c21 Mar 2018 12:28:39 -  1.35
> > > +++ kern/kern_rwlock.c3 Jun 2018 17:00:02 -
> > > @@ -223,6 +223,8 @@ _rw_enter(struct rwlock *rwl, int flags
> > >   lop_flags = LOP_NEWORDER;
> > >   if (flags & RW_WRITE)
> > >   lop_flags |= LOP_EXCLUSIVE;
> > > + if (flags & RW_DUPOK)
> > > + lop_flags |= LOP_DUPOK;
> > >   if ((flags & RW_NOSLEEP) == 0 && (flags & RW_DOWNGRADE) == 0)
> > >   WITNESS_CHECKORDER(&rwl->rwl_lock_obj, lop_flags, file,
> > line,
> > >   NULL);
> > > Index: kern/vfs_subr.c
> > > ===
> > > RCS file: /data/src/openbsd/src/sys/kern/vfs_subr.c,v
> > > retrieving revision 1.273
> > > diff -u -p -r1.273 vfs_subr.c
> > > --- kern/vfs_subr.c   27 May 2018 06:02:14 -  1.273
> > > +++ kern/vfs_subr.c   3 Jun 2018 17:04:09 -
> > > @@ -188,6 +188,11 @@ vfs_busy(struct mount *mp, int flags)
> > >   else
> > >   rwflags |= RW_NOSLEEP;
> > >
> > > +#ifdef WITNESS
> > > + if (flags & VB_DUPOK)
> > > + rwflags |= RW_DUPOK;
> > > +#endif
> > > +
> >
> > The other parts where you added the dup are not checking for Witness. This
> > part above should be for all kernels, right? Witness or non-witness.
> >
> 
> No, the other code-generating additions, in kern_rwlock.c, are also inside
> #ifdef WITNESS, just outside of the context of the diff.  The RW_DUPOK flag
> has no effect if it's not a WITNESS kernel so excluding those lines is
> intentional.

My apologies, and sorry for the noise!



Re: witness report

2018-06-03 Thread Philip Guenther
On Sun, Jun 3, 2018 at 12:51 PM, Amit Kulkarni  wrote:

> On Sun, 3 Jun 2018 10:37:30 -0700
> Philip Guenther  wrote:
>
...

> > Index: kern/kern_rwlock.c
> > ===
> > RCS file: /data/src/openbsd/src/sys/kern/kern_rwlock.c,v
> > retrieving revision 1.35
> > diff -u -p -r1.35 kern_rwlock.c
> > --- kern/kern_rwlock.c21 Mar 2018 12:28:39 -  1.35
> > +++ kern/kern_rwlock.c3 Jun 2018 17:00:02 -
> > @@ -223,6 +223,8 @@ _rw_enter(struct rwlock *rwl, int flags
> >   lop_flags = LOP_NEWORDER;
> >   if (flags & RW_WRITE)
> >   lop_flags |= LOP_EXCLUSIVE;
> > + if (flags & RW_DUPOK)
> > + lop_flags |= LOP_DUPOK;
> >   if ((flags & RW_NOSLEEP) == 0 && (flags & RW_DOWNGRADE) == 0)
> >   WITNESS_CHECKORDER(&rwl->rwl_lock_obj, lop_flags, file,
> line,
> >   NULL);
> > Index: kern/vfs_subr.c
> > ===
> > RCS file: /data/src/openbsd/src/sys/kern/vfs_subr.c,v
> > retrieving revision 1.273
> > diff -u -p -r1.273 vfs_subr.c
> > --- kern/vfs_subr.c   27 May 2018 06:02:14 -  1.273
> > +++ kern/vfs_subr.c   3 Jun 2018 17:04:09 -
> > @@ -188,6 +188,11 @@ vfs_busy(struct mount *mp, int flags)
> >   else
> >   rwflags |= RW_NOSLEEP;
> >
> > +#ifdef WITNESS
> > + if (flags & VB_DUPOK)
> > + rwflags |= RW_DUPOK;
> > +#endif
> > +
>
> The other parts where you added the dup are not checking for Witness. This
> part above should be for all kernels, right? Witness or non-witness.
>

No, the other code-generating additions, in kern_rwlock.c, are also inside
#ifdef WITNESS, just outside of the context of the diff.  The RW_DUPOK flag
has no effect if it's not a WITNESS kernel so excluding those lines is
intentional.


Philip Guenther


Re: witness report

2018-06-03 Thread Amit Kulkarni
On Sun, 3 Jun 2018 10:37:30 -0700
Philip Guenther  wrote:

> On Sun, 3 Jun 2018, Theo de Raadt wrote:
> > Philip Guenther  wrote:
> > > The warning is not that a single filesystem is being locked 
> > > recursively by a single thread, but just that a single thread is 
> > > holding locks on multiple filesystems.
> > 
> > vfs_stall() needs to grab locks on all filesystems, to stop a variety of 
> > filesystem transactions.  (Other types of transactions are blocked in 
> > other ways).
> > 
> > sys_umount() grabs locks on all filesystems above it, to stop anyone 
> > from doing a parallel mount/unmount along the same path.
> > 
> > This is all normal.
> 
> Diff below adds VB_DUPOK to indicate that a given vfs_busy() call is 
> expected/permitted to occur while the thread already holds another 
> filesystem busy; the caller is responsible for ensuring the filesystems 
> are locked in the correct order (or that NOWAIT is used safely as in the 
> sys_mount() case).
> 
> As part of this, this plumbs RW_DUPOK for rw_enter().
> 
> I no longer get the witness warning on hibernate with this.
> 
> ok?
> 
> Philip Guenther
> 
> 
> Index: sys/mount.h
> ===
> RCS file: /data/src/openbsd/src/sys/sys/mount.h,v
> retrieving revision 1.136
> diff -u -p -r1.136 mount.h
> --- sys/mount.h   8 May 2018 08:58:49 -   1.136
> +++ sys/mount.h   3 Jun 2018 17:05:28 -
> @@ -564,6 +564,7 @@ int   vfs_busy(struct mount *, int);
>  #define VB_WRITE 0x02
>  #define VB_NOWAIT0x04/* immediately fail on busy lock */
>  #define VB_WAIT  0x08/* sleep fail on busy lock */
> +#define VB_DUPOK 0x10/* permit duplicate mount busying */
>  
>  int vfs_isbusy(struct mount *);
>  int vfs_mount_foreach_vnode(struct mount *, int (*func)(struct vnode *,
> Index: sys/rwlock.h
> ===
> RCS file: /data/src/openbsd/src/sys/sys/rwlock.h,v
> retrieving revision 1.22
> diff -u -p -r1.22 rwlock.h
> --- sys/rwlock.h  12 Aug 2017 23:27:44 -  1.22
> +++ sys/rwlock.h  3 Jun 2018 17:05:32 -
> @@ -116,6 +116,7 @@ struct rwlock {
>  #define RW_SLEEPFAIL 0x0020UL /* fail if we slept for the lock */
>  #define RW_NOSLEEP   0x0040UL /* don't wait for the lock */
>  #define RW_RECURSEFAIL   0x0080UL /* Fail on recursion for RRW 
> locks. */
> +#define RW_DUPOK 0x0100UL /* Permit duplicate lock */
>  
>  /*
>   * for rw_status() and rrw_status() only: exclusive lock held by
> Index: kern/kern_rwlock.c
> ===
> RCS file: /data/src/openbsd/src/sys/kern/kern_rwlock.c,v
> retrieving revision 1.35
> diff -u -p -r1.35 kern_rwlock.c
> --- kern/kern_rwlock.c21 Mar 2018 12:28:39 -  1.35
> +++ kern/kern_rwlock.c3 Jun 2018 17:00:02 -
> @@ -223,6 +223,8 @@ _rw_enter(struct rwlock *rwl, int flags 
>   lop_flags = LOP_NEWORDER;
>   if (flags & RW_WRITE)
>   lop_flags |= LOP_EXCLUSIVE;
> + if (flags & RW_DUPOK)
> + lop_flags |= LOP_DUPOK;
>   if ((flags & RW_NOSLEEP) == 0 && (flags & RW_DOWNGRADE) == 0)
>   WITNESS_CHECKORDER(&rwl->rwl_lock_obj, lop_flags, file, line,
>   NULL);
> Index: kern/vfs_subr.c
> ===
> RCS file: /data/src/openbsd/src/sys/kern/vfs_subr.c,v
> retrieving revision 1.273
> diff -u -p -r1.273 vfs_subr.c
> --- kern/vfs_subr.c   27 May 2018 06:02:14 -  1.273
> +++ kern/vfs_subr.c   3 Jun 2018 17:04:09 -
> @@ -188,6 +188,11 @@ vfs_busy(struct mount *mp, int flags)
>   else
>   rwflags |= RW_NOSLEEP;
>  
> +#ifdef WITNESS
> + if (flags & VB_DUPOK)
> + rwflags |= RW_DUPOK;
> +#endif
> +

The other parts where you added the dup are not checking for Witness. This part 
above should be for all kernels, right? Witness or non-witness.



Re: witness report

2018-06-03 Thread Philip Guenther
On Sun, 3 Jun 2018, Theo de Raadt wrote:
> Philip Guenther  wrote:
> > The warning is not that a single filesystem is being locked 
> > recursively by a single thread, but just that a single thread is 
> > holding locks on multiple filesystems.
> 
> vfs_stall() needs to grab locks on all filesystems, to stop a variety of 
> filesystem transactions.  (Other types of transactions are blocked in 
> other ways).
> 
> sys_umount() grabs locks on all filesystems above it, to stop anyone 
> from doing a parallel mount/unmount along the same path.
> 
> This is all normal.

Diff below adds VB_DUPOK to indicate that a given vfs_busy() call is 
expected/permitted to occur while the thread already holds another 
filesystem busy; the caller is responsible for ensuring the filesystems 
are locked in the correct order (or that NOWAIT is used safely as in the 
sys_mount() case).

As part of this, this plumbs RW_DUPOK for rw_enter().

I no longer get the witness warning on hibernate with this.

ok?

Philip Guenther


Index: sys/mount.h
===
RCS file: /data/src/openbsd/src/sys/sys/mount.h,v
retrieving revision 1.136
diff -u -p -r1.136 mount.h
--- sys/mount.h 8 May 2018 08:58:49 -   1.136
+++ sys/mount.h 3 Jun 2018 17:05:28 -
@@ -564,6 +564,7 @@ int vfs_busy(struct mount *, int);
 #define VB_WRITE   0x02
 #define VB_NOWAIT  0x04/* immediately fail on busy lock */
 #define VB_WAIT0x08/* sleep fail on busy lock */
+#define VB_DUPOK   0x10/* permit duplicate mount busying */
 
 int vfs_isbusy(struct mount *);
 int vfs_mount_foreach_vnode(struct mount *, int (*func)(struct vnode *,
Index: sys/rwlock.h
===
RCS file: /data/src/openbsd/src/sys/sys/rwlock.h,v
retrieving revision 1.22
diff -u -p -r1.22 rwlock.h
--- sys/rwlock.h12 Aug 2017 23:27:44 -  1.22
+++ sys/rwlock.h3 Jun 2018 17:05:32 -
@@ -116,6 +116,7 @@ struct rwlock {
 #define RW_SLEEPFAIL   0x0020UL /* fail if we slept for the lock */
 #define RW_NOSLEEP 0x0040UL /* don't wait for the lock */
 #define RW_RECURSEFAIL 0x0080UL /* Fail on recursion for RRW locks. */
+#define RW_DUPOK   0x0100UL /* Permit duplicate lock */
 
 /*
  * for rw_status() and rrw_status() only: exclusive lock held by
Index: kern/kern_rwlock.c
===
RCS file: /data/src/openbsd/src/sys/kern/kern_rwlock.c,v
retrieving revision 1.35
diff -u -p -r1.35 kern_rwlock.c
--- kern/kern_rwlock.c  21 Mar 2018 12:28:39 -  1.35
+++ kern/kern_rwlock.c  3 Jun 2018 17:00:02 -
@@ -223,6 +223,8 @@ _rw_enter(struct rwlock *rwl, int flags 
lop_flags = LOP_NEWORDER;
if (flags & RW_WRITE)
lop_flags |= LOP_EXCLUSIVE;
+   if (flags & RW_DUPOK)
+   lop_flags |= LOP_DUPOK;
if ((flags & RW_NOSLEEP) == 0 && (flags & RW_DOWNGRADE) == 0)
WITNESS_CHECKORDER(&rwl->rwl_lock_obj, lop_flags, file, line,
NULL);
Index: kern/vfs_subr.c
===
RCS file: /data/src/openbsd/src/sys/kern/vfs_subr.c,v
retrieving revision 1.273
diff -u -p -r1.273 vfs_subr.c
--- kern/vfs_subr.c 27 May 2018 06:02:14 -  1.273
+++ kern/vfs_subr.c 3 Jun 2018 17:04:09 -
@@ -188,6 +188,11 @@ vfs_busy(struct mount *mp, int flags)
else
rwflags |= RW_NOSLEEP;
 
+#ifdef WITNESS
+   if (flags & VB_DUPOK)
+   rwflags |= RW_DUPOK;
+#endif
+
if (rw_enter(&mp->mnt_lock, rwflags))
return (EBUSY);
 
@@ -1602,7 +1607,7 @@ vfs_stall(struct proc *p, int stall)
 */
TAILQ_FOREACH_REVERSE(mp, &mountlist, mntlist, mnt_list) {
if (stall) {
-   error = vfs_busy(mp, VB_WRITE|VB_WAIT);
+   error = vfs_busy(mp, VB_WRITE|VB_WAIT|VB_DUPOK);
if (error) {
printf("%s: busy\n", mp->mnt_stat.f_mntonname);
allerror = error;
Index: kern/vfs_syscalls.c
===
RCS file: /data/src/openbsd/src/sys/kern/vfs_syscalls.c,v
retrieving revision 1.284
diff -u -p -r1.284 vfs_syscalls.c
--- kern/vfs_syscalls.c 2 Jun 2018 10:27:43 -   1.284
+++ kern/vfs_syscalls.c 3 Jun 2018 17:04:55 -
@@ -230,7 +230,7 @@ sys_mount(struct proc *p, void *v, regis
 
 update:
/* Ensure that the parent mountpoint does not get unmounted. */
-   error = vfs_busy(vp->v_mount, VB_READ|VB_NOWAIT);
+   error = vfs_busy(vp->v_mount, VB_READ|VB_NOWAIT|VB_DUPOK);
if (error) {
if (mp->mnt_flag & MNT_UPDATE) {
mp->mnt_flag = mntflag;
@@ -439,7 +439,7 @@ dounmount(struct mount *mp, int flags, s
   

Re: witness report

2018-06-03 Thread Theo de Raadt
Philip Guenther  wrote:

> The warning is not that a single filesystem is being locked recursively by a 
> single
> thread, but just that a single thread is holding locks on multiple 
> filesystems.

vfs_stall() needs to grab locks on all filesystems, to stop a variety of
filesystem transactions.  (Other types of transactions are blocked in other
ways).

sys_umount() grabs locks on all filesystems above it, to stop anyone from
doing a parallel mount/unmount along the same path.

This is all normal.



Re: witness report

2018-06-03 Thread Philip Guenther
On Sun, Jun 3, 2018 at 9:08 AM, Theo de Raadt  wrote:

> Philip Guenther  wrote:
>
> > On Sun, Jun 3, 2018 at 3:26 AM, Klemens Nanni  wrote:
> >
> > > Snap from 31.05.2018 with
> > >
> > > OpenBSD 6.3-current (GENERIC.MP) #0: Sat Jun  2 16:21:22 CEST 2018
> > > k...@x250.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > >
> > >
> > > acquiring duplicate lock of same type: "&mp->mnt_lock"
> > > 1st vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> > > 2nd vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> > > Starting stack trace...
> > > witness_checkorder(9,81ab15c8,bf,80d00040,21) at
> > > witness_checkorder+0x63d
> > > _rw_enter(0,1,0,80d0) at _rw_enter+0x56
> > > vfs_stall(1,80025400) at vfs_stall+0xab
> > >
> >
> > [Also reported by bluhm@ and others]
> >
> > Is vfs_stall() the only place that locks (busies) multiple mounts?
>
> Isn't that also how unmount works?  I believe it locks the mount it will
> be discarding, and the filesystem it is mounted upon.  It is serializing
> against other activities, such as a new mount or another umount occurring
> against the same dir.

Hmm, yes, dounmount() can busy multiple filesystems concurrently when doing
a forced unmount of a filesystem which has a filesystem below it, but in
the normal case of unmounting a leaf filesystem I don't see where it would
have multiple filesystems busy concurrently.


> I see vfs_stall() doing 1 lock per filesystem.  Why does this report
> say there are duplicates?
>

 The warning is not that a single filesystem is being locked recursively by
a single thread, but just that a single thread is holding locks on multiple
filesystems.


Philip


Re: witness report

2018-06-03 Thread Theo de Raadt
Philip Guenther  wrote:

> On Sun, Jun 3, 2018 at 3:26 AM, Klemens Nanni  wrote:
> 
> > Snap from 31.05.2018 with
> >
> > OpenBSD 6.3-current (GENERIC.MP) #0: Sat Jun  2 16:21:22 CEST 2018
> > k...@x250.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> >
> >
> > acquiring duplicate lock of same type: "&mp->mnt_lock"
> > 1st vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> > 2nd vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> > Starting stack trace...
> > witness_checkorder(9,81ab15c8,bf,80d00040,21) at
> > witness_checkorder+0x63d
> > _rw_enter(0,1,0,80d0) at _rw_enter+0x56
> > vfs_stall(1,80025400) at vfs_stall+0xab
> >
> 
> [Also reported by bluhm@ and others]
> 
> Is vfs_stall() the only place that locks (busies) multiple mounts?

Isn't that also how unmount works?  I believe it locks the mount it will
be discarding, and the filesystem it is mounted upon.  It is serializing
against other activities, such as a new mount or another umount occurring
against the same dir.

I see vfs_stall() doing 1 lock per filesystem.  Why does this report
say there are duplicates?



Re: witness report

2018-06-03 Thread Philip Guenther
On Sun, Jun 3, 2018 at 3:26 AM, Klemens Nanni  wrote:

> Snap from 31.05.2018 with
>
> OpenBSD 6.3-current (GENERIC.MP) #0: Sat Jun  2 16:21:22 CEST 2018
> k...@x250.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>
>
> acquiring duplicate lock of same type: "&mp->mnt_lock"
> 1st vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> 2nd vfslock @ /usr/src/sys/kern/vfs_subr.c:191
> Starting stack trace...
> witness_checkorder(9,81ab15c8,bf,80d00040,21) at
> witness_checkorder+0x63d
> _rw_enter(0,1,0,80d0) at _rw_enter+0x56
> vfs_stall(1,80025400) at vfs_stall+0xab
>

[Also reported by bluhm@ and others]

Is vfs_stall() the only place that locks (busies) multiple mounts?  If it's
the only place _and_ it holds no other locks when doing that, then I think
it actually is safe, because it would be equivalent to multiple threads
each holding only a single mount lock and nothing else.

Even if we agree that evaluation is correct, I don't think we want to mark
mnt_lock as DUPOK for all purposes, but rather just pass LOP_DUPOK through
for the calls from vfs_stall().


Philip Guenther


witness report

2018-06-03 Thread Klemens Nanni
Snap from 31.05.2018 with

OpenBSD 6.3-current (GENERIC.MP) #0: Sat Jun  2 16:21:22 CEST 2018
k...@x250.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP


acquiring duplicate lock of same type: "&mp->mnt_lock"
1st vfslock @ /usr/src/sys/kern/vfs_subr.c:191
2nd vfslock @ /usr/src/sys/kern/vfs_subr.c:191
Starting stack trace...
witness_checkorder(9,81ab15c8,bf,80d00040,21) at 
witness_checkorder+0x63d
_rw_enter(0,1,0,80d0) at _rw_enter+0x56
vfs_stall(1,80025400) at vfs_stall+0xab
acpi_sleep_state(80025400,80025400) at acpi_sleep_state+0x1a7
acpi_sleep_task(80025400,8002a2e0) at acpi_sleep_task+0x10
acpi_thread(0) at acpi_thread+0x1b8
end trace frame: 0x0, count: 251
End of stack trace.



witness report

2018-06-02 Thread Bryan Steele
Happens while booting my AMD desktop w/ radeon.

lock order reversal:
 1st 0xff01d4b2be30 vmmaplk (&map->lock) @
/home/brynet/Projects/current/src/sys/uvm/uvm_map.c:4433
 2nd 0xff01d415a0a0 inode (&ip->i_lock) @
/home/brynet/Projects/current/src/sys/ufs/ufs/ufs_vnops.c:1559
lock order "&ip->i_lock"(rrwlock) -> "&map->lock"(rwlock) first seen at:
#0  witness_checkorder+0x494
#1  _rw_enter+0x56
#2  vm_map_lock_ln+0xac
#3  uvm_map+0x191
#4  km_alloc+0x15a
#5  pool_multi_alloc_ni+0xae
#6  pool_p_alloc+0x49
#7  pool_do_get+0xd7
#8  pool_get+0xa2
#9  ufsdirhash_build+0x311
#10 ufs_lookup+0x18a
#11 VOP_LOOKUP+0x42
#12 vfs_lookup+0x27e
#13 namei+0x205
#14 loadfirmware+0xef
#15 r600_init_microcode+0x382
#16 r600_init+0x4d5
#17 radeon_device_init+0xa90
#18 radeondrm_attachhook+0x34
lock order "&map->lock"(rwlock) -> "&ip->i_lock"(rrwlock) first seen at:
#0  witness_checkorder+0x494
#1  _rw_enter+0x56
#2  _rrw_enter+0x32
#3  VOP_LOCK+0x30
#4  vn_lock+0x24
#5  uvn_io+0x1a5
#6  uvm_pager_put+0xf9
#7  uvn_flush+0x414
#8  uvm_map_clean+0x3c7
#9  syscall+0x31d
#10 Xsyscall_untramp+0xc0