Re: vfs.zfs.min_auto_ashift and OpenZFS

2020-05-01 Thread Matthew Macy
OpenZFS doesn't have the same ashift optimization logic that FreeBSD
has. It's something that needs to be resolved before the code can be
integrated downstream.
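
In the meantime, a stopgap sketch (assuming the port keeps the
upstream ashift pool property; "tank" and the ada* device names are
placeholders) is to pin ashift per pool at creation time instead of
relying on the sysctl:

# Pin the minimum sector-size hint per pool rather than globally via
# vfs.zfs.min_auto_ashift; "tank" and ada0..ada3 are placeholders.
zpool create -o ashift=12 tank mirror ada0 ada1
# The same property can be set when adding vdevs to an existing pool.
zpool add -o ashift=12 tank mirror ada2 ada3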

-M

On Fri, May 1, 2020 at 11:04 AM Graham Perrin  wrote:
>
> In my sysctl.conf:
>
> vfs.zfs.min_auto_ashift=12
>
> – if I recall correctly, the line was written automatically when I
> installed FreeBSD-CURRENT a year or so ago.
>
> With OpenZFS enabled:
>
> root@momh167-gjp4-8570p:~ # sysctl vfs.zfs.min_auto_ashift
> sysctl: unknown oid 'vfs.zfs.min_auto_ashift'
>
> Should I have a different line in sysctl.conf, or is
> vfs.zfs.min_auto_ashift not required with OpenZFS?
>
> Hardware: HP EliteBook 8570p, circa 2013
> 
>


Re: vfs.zfs.min_auto_ashift and OpenZFS

2020-05-01 Thread Steven Hartland

Looks like it should still be there if you're using the in-tree ZFS:
https://svnweb.freebsd.org/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c?revision=358333&view=markup#l144
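
A quick sanity check for which ZFS is actually loaded (a sketch; it
assumes nothing beyond the OID being present only in the in-tree ZFS):

# Under the in-tree ZFS this prints the OID's description; under the
# OpenZFS module it fails with "unknown oid", as in the original post.
sysctl -d vfs.zfs.min_auto_ashift
# Listing loaded kernel modules also shows which zfs.ko is active.
kldstat | grep -i zfs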

On 01/05/2020 19:03, Graham Perrin wrote:
>
> In my sysctl.conf:
>
> vfs.zfs.min_auto_ashift=12
>
> – if I recall correctly, the line was written automatically when I
> installed FreeBSD-CURRENT a year or so ago.
>
> With OpenZFS enabled:
>
> root@momh167-gjp4-8570p:~ # sysctl vfs.zfs.min_auto_ashift
> sysctl: unknown oid 'vfs.zfs.min_auto_ashift'
>
> Should I have a different line in sysctl.conf, or is
> vfs.zfs.min_auto_ashift not required with OpenZFS?
>
> Hardware: HP EliteBook 8570p, circa 2013



vfs.zfs.min_auto_ashift and OpenZFS

2020-05-01 Thread Graham Perrin

In my sysctl.conf:

vfs.zfs.min_auto_ashift=12

– if I recall correctly, the line was written automatically when I 
installed FreeBSD-CURRENT a year or so ago.


With OpenZFS enabled:

root@momh167-gjp4-8570p:~ # sysctl vfs.zfs.min_auto_ashift
sysctl: unknown oid 'vfs.zfs.min_auto_ashift'

Should I have a different line in sysctl.conf, or is 
vfs.zfs.min_auto_ashift not required with OpenZFS?


Hardware: HP EliteBook 8570p, circa 2013


beadm no longer able to destroy, maybe since using OpenZFS

2020-05-01 Thread Graham Perrin

root@momh167-gjp4-8570p:~ # date ; uname -v
Fri May  1 18:52:31 BST 2020
FreeBSD 13.0-CURRENT #54 r360237: Fri Apr 24 09:10:37 BST 2020 
root@momh167-gjp4-8570p:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG

root@momh167-gjp4-8570p:~ # beadm list
BE       Active Mountpoint  Space Created
Waterfox -      -            1.3G 2020-03-10 18:24
r357746h -      -          928.4M 2020-04-08 09:28
r360237b -      -          213.6M 2020-04-28 20:17
r360237c -      -           92.0G 2020-04-29 13:24
r360237d -      -          153.2M 2020-04-30 13:08
r360237e NR     /          720.4M 2020-05-01 17:58
root@momh167-gjp4-8570p:~ # beadm destroy r360237b
Are you sure you want to destroy 'r360237b'?
This action cannot be undone (y/[n]): y
cannot promote 'copperbowl/ROOT/r360237d': not a cloned filesystem
root@momh167-gjp4-8570p:~ # beadm destroy r360237c
Are you sure you want to destroy 'r360237c'?
This action cannot be undone (y/[n]): y
Boot environment 'r360237c' was created from existing snapshot
Destroy '-' snapshot? (y/[n]): y
cannot destroy 'copperbowl/ROOT/r360237c': filesystem has dependent clones
use '-R' to destroy the following datasets:
copperbowl/ROOT/r360237e
copperbowl/ROOT/r360237d@2020-05-01-17:58:54
copperbowl/ROOT/r360237d@2020-05-01-17:58:33
copperbowl/ROOT/r360237d
copperbowl/ROOT/r360237b@2020-04-30-13:08:36
copperbowl/ROOT/r360237b
copperbowl/ROOT/r357746h
copperbowl/ROOT/Waterfox
root@momh167-gjp4-8570p:~ # beadm destroy r360237d
Are you sure you want to destroy 'r360237d'?
This action cannot be undone (y/[n]): y
cannot promote 'copperbowl/ROOT/r360237e': not a cloned filesystem
root@momh167-gjp4-8570p:~ # beadm list -as
BE/Dataset/Snapshot                              Active Mountpoint  Space Created

Waterfox
  copperbowl/ROOT/Waterfox                       -      -          314.0M 2020-03-10 18:24
    r360237c@2020-03-20-06:19:45                 -      -            1.0G 2020-03-20 06:19

r357746h
  copperbowl/ROOT/r357746h                       -      -            3.4M 2020-04-08 09:28
    r360237c@2020-04-09-17:59:32                 -      -          925.0M 2020-04-09 17:59

r360237b
  copperbowl/ROOT/r360237b                       -      -          202.4M 2020-04-28 20:17
    r360237c@2020-04-29-13:24:52                 -      -           11.2M 2020-04-29 13:24
  copperbowl/ROOT/r360237b@2020-04-30-13:08:36   -      -            1.4M 2020-04-30 13:08

r360237c
  copperbowl/ROOT/r360237c@2020-03-20-06:19:45   -      -            1.0G 2020-03-20 06:19
  copperbowl/ROOT/r360237c@2020-04-09-17:59:32   -      -          925.0M 2020-04-09 17:59
  copperbowl/ROOT/r360237c@2020-04-20-06:44:01   -      -           11.6G 2020-04-20 06:44
  copperbowl/ROOT/r360237c@2020-04-29-13:24:52   -      -           11.2M 2020-04-29 13:24
  copperbowl/ROOT/r360237c                       -      -           92.0G 2020-04-29 13:24

r360237d
  copperbowl/ROOT/r360237d                       -      -          151.8M 2020-04-30 13:08
    r360237b@2020-04-30-13:08:36                 -      -            1.4M 2020-04-30 13:08
  copperbowl/ROOT/r360237d@2020-05-01-17:58:33   -      -          440.0K 2020-05-01 17:58
  copperbowl/ROOT/r360237d@2020-05-01-17:58:54   -      -          408.0K 2020-05-01 17:58

r360237e
  copperbowl/ROOT/r360237e                       NR     /          720.0M 2020-05-01 17:58
    r360237d@2020-05-01-17:58:54                 -      -          408.0K 2020-05-01 17:58
root@momh167-gjp4-8570p:~ # zfs list -t snapshot
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
copperbowl/ROOT/r360237b@2020-04-30-13:08:36            1.36M      -  62.3G  -
copperbowl/ROOT/r360237c@2020-03-20-06:19:45            1.02G      -  59.2G  -
copperbowl/ROOT/r360237c@2020-04-09-17:59:32             925M      -  60.0G  -
copperbowl/ROOT/r360237c@2020-04-20-06:44:01            11.6G      -  61.7G  -
copperbowl/ROOT/r360237c@2020-04-29-13:24:52            11.2M      -  62.2G  -
copperbowl/ROOT/r360237d@2020-05-01-17:58:33             440K      -  62.3G  -
copperbowl/ROOT/r360237d@2020-05-01-17:58:54             408K      -  62.3G  -
copperbowl/iocage/releases/12.0-RELEASE/root@jbrowsers     8K      -  1.24G  -
copperbowl/poudriere/jails/head@clean                    384K      -  1.91G  -
copperbowl/usr/home@2020-04-28-10:42-r360237            5.89G      -   172G  -
root@momh167-gjp4-8570p:~ #
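
For context on the failed destroys: beadm relies on zfs promote to
re-parent dependent clones so that an origin's snapshots migrate to
the clone and the origin becomes destroyable. A manual sketch of the
same steps (dataset names taken from the listing above; this is
illustration, not a recipe — check the clone/origin links first):

# Map out which datasets are clones and what their origins are.
zfs get -r -o name,value origin copperbowl/ROOT
# Promoting a clone migrates the origin's snapshots onto it, after
# which the old origin (here the r360237b BE) can usually be destroyed.
zfs promote copperbowl/ROOT/r360237e
zfs destroy -r copperbowl/ROOT/r360237b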



Re: 32-bit powerpc head -r360311: lock order reversal between: "PROC (UMA zone)" and "kernelpmap (kernelpmap)": Is this expected?

2020-05-01 Thread Mark Millard



On 2020-Apr-30, at 18:30, Mark Millard  wrote:

> Using artifact.ci's head -r360311 debug-kernel materials:
> 
> https://artifact.ci.freebsd.org/snapshot/head/r360311/powerpc/powerpc/kernel*.txz
> 
> I got the following notice:
> 
> lock order reversal:
> 1st 0x1cbb680 PROC (UMA zone) @ /usr/src/sys/vm/uma_core.c:4387
> 2nd 0x113c99c kernelpmap (kernelpmap) @ /usr/src/sys/powerpc/aim/mmu_oea.c:1524
> stack backtrace:
> #0 0x5d1e5c at witness_debugger+0x94
> #1 0x5d1b34 at witness_checkorder+0xb50
> #2 0x51d774 at __mtx_lock_flags+0xcc
> #3 0x90902c at moea_kextract+0x5c
> #4 0x9462ac at pmap_kextract+0x98
> #5 0x8a417c at zone_release+0xf0
> #6 0x8abc14 at bucket_drain+0x2f0
> #7 0x8ab64c at bucket_free+0x54
> #8 0x8ab8bc at bucket_cache_reclaim+0x1bc
> #9 0x8ab3c4 at zone_reclaim+0x128
> #10 0x8a7e60 at uma_reclaim+0x1d0
> #11 0x8d96ac at vm_pageout_worker+0x4d8
> #12 0x8d91c0 at vm_pageout+0x1b0
> #13 0x4f67a0 at fork_exit+0xb0
> #14 0x94892c at fork_trampoline+0xc
> 
> Is the above interesting or is it one of the
> known-safe lock order reversals that should
> be ignored?
> 
> (The notice is from something like 4.5 hours
> before I noticed it.)
> 
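
If it helps, one rough way to check (assuming /usr/src matches the
running kernel): witness's hard-wired lock orderings live in the
order_lists[] table in sys/kern/subr_witness.c, so grepping for the
lock names involved shows whether that file already knows the pair:

# If a lock name never appears in subr_witness.c (order_lists[] and,
# when built with BLESSING, blessed_list[]), the reported reversal is
# not one witness has been told about.
grep -n '"kernelpmap"' /usr/src/sys/kern/subr_witness.c
grep -n '"UMA zone"' /usr/src/sys/kern/subr_witness.c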

While running kyua to see what it might run into . . .

lock order reversal:
 1st 0x1c34800 filedesc0 (UMA zone) @ /usr/src/sys/vm/uma_core.c:4387
 2nd 0x113c99c kernelpmap (kernelpmap) @ /usr/src/sys/powerpc/aim/mmu_oea.c:1524
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x51d774 at __mtx_lock_flags+0xcc
#3 0x90902c at moea_kextract+0x5c
#4 0x9462ac at pmap_kextract+0x98
#5 0x8a417c at zone_release+0xf0
#6 0x8abc14 at bucket_drain+0x2f0
#7 0x8ab64c at bucket_free+0x54
#8 0x8ab8bc at bucket_cache_reclaim+0x1bc
#9 0x8ab3c4 at zone_reclaim+0x128
#10 0x8a7d58 at uma_reclaim+0xc8
#11 0x656d24 at vnlru_proc+0x908
#12 0x4f67a0 at fork_exit+0xb0
#13 0x94892c at fork_trampoline+0xc

The frames from witness_debugger through zone_reclaim look the same
as in the prior report; the uma_reclaim frame has a different offset,
and the callers above it differ (vnlru_proc rather than
vm_pageout_worker).
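
Since these traces are against the artifact.ci debug kernel, the raw
addresses can be resolved to source lines (a sketch; the kernel.debug
path depends on where the kernel*.txz files were extracted):

# Resolve a backtrace frame against the matching debug kernel;
# 0x5d1e5c is the witness_debugger frame from the traces above.
addr2line -f -e /usr/lib/debug/boot/kernel/kernel.debug 0x5d1e5c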

There is also:

lock order reversal:
 1st 0xfbed24 allprison (allprison) @ /usr/src/sys/kern/kern_jail.c:984
 2nd 0x10706c4 vnet_sysinit_sxlock (vnet_sysinit_sxlock) @ /usr/src/sys/net/vnet.c:577
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x555300 at _sx_slock_int+0xa0
#3 0x555b10 at _sx_slock+0x28
#4 0x6b7d84 at vnet_alloc+0xf4
#5 0x4fd09c at kern_jail_set+0x1868
#6 0x4fe938 at sys_jail_set+0x70
#7 0x9492fc at trap+0x748
#8 0x93d1c0 at powerpc_interrupt+0x178

And:

lock order reversal:
 1st 0x106f5d8 ifnet_sx (ifnet_sx) @ /usr/src/sys/netinet/in.c:914
 2nd 0x107071c in_control (in_control) @ /usr/src/sys/netinet/in.c:243
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x553ca4 at _sx_xlock+0x98
#3 0x6c45b8 at in_ifscrub_all+0xec
#4 0x6dbbc4 at ip_destroy+0xb0
#5 0x6b81c0 at vnet_destroy+0x154
#6 0x4fefb0 at prison_deref+0x2cc
#7 0x5007dc at prison_remove_one+0x148
#8 0x500658 at sys_jail_remove+0x2a4
#9 0x9492fc at trap+0x748
#10 0x93d1c0 at powerpc_interrupt+0x178

I also do not know about the GEOM-topology-related lock order
reversals below . . .

lock order reversal:
 1st 0xfbca1c GEOM topology (GEOM topology) @ /usr/src/sys/geom/eli/g_eli.c:746
 2nd 0xd49000 allproc (allproc) @ /usr/src/sys/kern/kern_fork.c:382
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x553ca4 at _sx_xlock+0x98
#3 0x4f4fb4 at fork1+0x7dc
#4 0x5041c8 at kproc_create+0xd4
#5 0xdd27c390 at g_eli_create+0x774
#6 0xdd281048 at g_eli_config+0x23fc
#7 0x499188 at g_ctl_req+0x154
#8 0x49e784 at g_run_events+0x194
#9 0x4a1580 at g_event_procbody+0x74
#10 0x4f67a0 at fork_exit+0xb0
#11 0x94892c at fork_trampoline+0xc

lock order reversal:
 1st 0xfbca1c GEOM topology (GEOM topology) @ /usr/src/sys/geom/eli/g_eli.c:746
 2nd 0xd2baccc8 filedesc structure (filedesc structure) @ /usr/src/sys/kern/kern_descrip.c:2064
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x555300 at _sx_slock_int+0xa0
#3 0x555b10 at _sx_slock+0x28
#4 0x4dc128 at fdinit+0xe8
#5 0x4dc638 at fdcopy+0x68
#6 0x4f52ac at fork1+0xad4
#7 0x5041c8 at kproc_create+0xd4
#8 0xdd27c390 at g_eli_create+0x774
#9 0xdd281048 at g_eli_config+0x23fc
#10 0x499188 at g_ctl_req+0x154
#11 0x49e784 at g_run_events+0x194
#12 0x4a1580 at g_event_procbody+0x74
#13 0x4f67a0 at fork_exit+0xb0
#14 0x94892c at fork_trampoline+0xc

lock order reversal:
 1st 0xfbca1c GEOM topology (GEOM topology) @ /usr/src/sys/geom/eli/g_eli.c:746
 2nd 0xd49080 proctree (proctree) @ /usr/src/sys/kern/kern_fork.c:557
stack backtrace:
#0 0x5d1e5c at witness_debugger+0x94
#1 0x5d1b34 at witness_checkorder+0xb50
#2 0x553ca4 at _sx_xlock+0x98
#3 0x4f5650 at fork1+0xe78
#4 0x5041c8 at kproc_create+0xd4
#5 0xdd27c390 at g_eli_create+0x774
#6 0xdd281048 at g_eli_config+0x23fc
#7 0x499188 at g_ctl_req+0x154
#8 0x49e784 at