Re: FreeBSD 13.0-RC5 Now Available

2021-04-04 Thread Glen Barber
Is it necessary to quote the *entire* email (including checksums)?

Glen
Sent from my phone.
Please excuse my brevity and/or typos.

> On Apr 4, 2021, at 4:50 PM, Alan Somers  wrote:
> 
> On Sat, Apr 3, 2021 at 9:34 AM Glen Barber  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>> 
>> The fifth RC build of the 13.0-RELEASE release cycle is now available.
>> 
>> Installation images are available for:
>> 
>> o 13.0-RC5 amd64 GENERIC
>> o 13.0-RC5 i386 GENERIC
>> o 13.0-RC5 powerpc GENERIC
>> o 13.0-RC5 powerpc64 GENERIC64
>> o 13.0-RC5 powerpc64le GENERIC64LE
>> o 13.0-RC5 powerpcspe MPC85XXSPE
>> o 13.0-RC5 armv6 RPI-B
>> o 13.0-RC5 armv7 GENERICSD
>> o 13.0-RC5 aarch64 GENERIC
>> o 13.0-RC5 aarch64 RPI
>> o 13.0-RC5 aarch64 PINE64
>> o 13.0-RC5 aarch64 PINE64-LTS
>> o 13.0-RC5 aarch64 PINEBOOK
>> o 13.0-RC5 aarch64 ROCK64
>> o 13.0-RC5 aarch64 ROCKPRO64
>> o 13.0-RC5 riscv64 GENERIC
>> o 13.0-RC5 riscv64 GENERICSD
>> 
>> Note regarding arm SD card images: For convenience for those without
>> console access to the system, a freebsd user with a password of
>> freebsd is available by default for ssh(1) access.  Additionally,
>> the root user password is set to root.  It is strongly recommended
>> to change the password for both users after gaining access to the
>> system.
>> 
>> Installer images and memory stick images are available here:
>> 
>> https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.0/
>> 
>> The image checksums follow at the end of this e-mail.
>> 
>> If you notice problems you can report them through the Bugzilla PR
>> system or on the -stable mailing list.
>> 
>> If you would like to use Git to do a source based update of an existing
>> system, use the "releng/13.0" branch.
>> 
>> A summary of changes since 13.0-RC4 includes:
>> 
>> o COMPAT_FREEBSD32 fill/set dbregs/fpregs has been implemented for
>>   aarch64.
>> 
>> o Miscellaneous DTrace updates.
>> 
>> o An issue that could prevent some services, notably Nginx, from
>>   restarting properly has been addressed.
>> 
>> o Miscellaneous networking fixes.
>> 
>> A list of changes since 12.2-RELEASE is available in the releng/13.0
>> release notes:
>> 
>> https://www.freebsd.org/releases/13.0R/relnotes.html
>> 
>> Please note, the release notes page is not yet complete, and will be
>> updated on an ongoing basis as the 13.0-RELEASE cycle progresses.
>> 
>> === Virtual Machine Disk Images ===
>> 
>> VM disk images are available for the amd64, i386, and aarch64
>> architectures.  Disk images may be downloaded from the following URL
>> (or any of the FreeBSD download mirrors):
>> 
>> https://download.freebsd.org/ftp/releases/VM-IMAGES/13.0-RC5/
>> 
>> The partition layout is:
>> 
>> ~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
>> ~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
>> ~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)
>> 
>> The disk images are available in QCOW2, VHD, VMDK, and raw disk image
>> formats.  The image download size is approximately 135 MB and 165 MB
>> (amd64 and i386, respectively), decompressing to a 21 GB sparse image.
>> 
>> Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
>> loader file is needed for qemu-system-aarch64 to be able to boot the
>> virtual machine images.  See this page for more information:
>> 
>> https://wiki.freebsd.org/arm64/QEMU
>> 
>> To boot the VM image, run:
>> 
>> % qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt  \
>> -bios QEMU_EFI.fd -serial telnet::,server -nographic \
>> -drive if=none,file=VMDISK,id=hd0 \
>> -device virtio-blk-device,drive=hd0 \
>> -device virtio-net-device,netdev=net0 \
>> -netdev user,id=net0
>> 
>> Be sure to replace "VMDISK" with the path to the virtual machine image.
>> 
>> BASIC-CI images can be found at:
>> 
>> https://download.freebsd.org/ftp/releases/CI-IMAGES/13.0-RC5/
>> 
>> === Amazon EC2 AMI Images ===
>> 
>> FreeBSD/amd64 EC2 AMIs are available in the following regions:
>> 
>>   af-south-1 region: ami-0fe76e3a8c6a8d108
>>   eu-north-1 region: ami-0fe9d5e3fd7bd2972
>>   ap-south-1 region: ami-0090069af2f905566
>>   eu-west-3 region: ami-042ea753bff8d6a9d
>>   eu-west-2 region: ami-08e0358d71a41ce97
>>   eu-south-1 region: ami-0a1cb76bf83c3c49c
>>   eu-west-1 region: ami-0559fa7d3edc6e607
>>   ap-northeast-3 region: ami-04492324222abcb1b
>>   ap-northeast-2 region: ami-0e851ff1f260888fd
>>   me-south-1 region: ami-087ab54ec6e4d0cbb
>>   ap-northeast-1 region: ami-0796973b853fba5e0
>>   sa-east-1 region: ami-03f738fc556689a14
>>   ca-central-1 region: ami-05f22cba7be241fbe
>>   ap-east-1 region: ami-07ac68b5cc29039bc
>>   ap-southeast-1 region: ami-04a8c807e53f07e72
>>   ap-southeast-2 region: ami-097dc8195cad3a688
>>   eu-central-1 region: ami-013f760d364d2d6a7
>>   us-east-1 region: ami-0e5adeb6a86cb63c4
>>   us-east-2 region: ami-04aac5053216613b1
>>   us-west-1 region: 

Re: FreeBSD 13.0-RC5 Now Available

2021-04-04 Thread Alan Somers
On Sun, Apr 4, 2021 at 3:38 PM Colin Percival  wrote:

> On 4/4/21 1:50 PM, Alan Somers wrote:
> > On Sat, Apr 3, 2021 at 9:34 AM Glen Barber  wrote:
> >
> > The fifth RC build of the 13.0-RELEASE release cycle is now available.
> >
> > In the past, making these releases required pushing updates to
> > https://svnweb.freebsd.org/base/user/cperciva/freebsd-update-build/ .
>
> Historically, we often made changes directly on the update builders and
> then brought the svn tree back into sync later.
>
> > However, that repo is read-only now.  I assume that it's been gitified,
> > but I can't find the new location.  Where is it?
>
> I think the freebsd-update build code might be homeless right now.  I know I
> have seen emails mentioning that it needs to land somewhere but I don't
> recall any decision being reached.
>

I vote for https://github.com/freebsd/freebsd-update-build .


Re: FreeBSD 13.0-RC5 Now Available

2021-04-04 Thread Alan Somers
On Sat, Apr 3, 2021 at 9:34 AM Glen Barber  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> The fifth RC build of the 13.0-RELEASE release cycle is now available.
>
> Installation images are available for:
>
> o 13.0-RC5 amd64 GENERIC
> o 13.0-RC5 i386 GENERIC
> o 13.0-RC5 powerpc GENERIC
> o 13.0-RC5 powerpc64 GENERIC64
> o 13.0-RC5 powerpc64le GENERIC64LE
> o 13.0-RC5 powerpcspe MPC85XXSPE
> o 13.0-RC5 armv6 RPI-B
> o 13.0-RC5 armv7 GENERICSD
> o 13.0-RC5 aarch64 GENERIC
> o 13.0-RC5 aarch64 RPI
> o 13.0-RC5 aarch64 PINE64
> o 13.0-RC5 aarch64 PINE64-LTS
> o 13.0-RC5 aarch64 PINEBOOK
> o 13.0-RC5 aarch64 ROCK64
> o 13.0-RC5 aarch64 ROCKPRO64
> o 13.0-RC5 riscv64 GENERIC
> o 13.0-RC5 riscv64 GENERICSD
>
> Note regarding arm SD card images: For convenience for those without
> console access to the system, a freebsd user with a password of
> freebsd is available by default for ssh(1) access.  Additionally,
> the root user password is set to root.  It is strongly recommended
> to change the password for both users after gaining access to the
> system.
>
> Installer images and memory stick images are available here:
>
> https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.0/
>
> The image checksums follow at the end of this e-mail.
>
> If you notice problems you can report them through the Bugzilla PR
> system or on the -stable mailing list.
>
> If you would like to use Git to do a source based update of an existing
> system, use the "releng/13.0" branch.
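
For a concrete example, a minimal source-based update might look like the
following sketch (the mirror URL and the etcupdate step are assumptions based
on the usual Handbook procedure, and the -j value is arbitrary):

  # fetch the releng/13.0 sources (assumes an empty /usr/src)
  git clone -b releng/13.0 https://git.freebsd.org/src.git /usr/src
  cd /usr/src
  # build world and kernel, install the kernel, then reboot
  make -j"$(sysctl -n hw.ncpu)" buildworld buildkernel
  make installkernel
  shutdown -r now
  # after the reboot, install world, merge /etc, and reboot again
  cd /usr/src && make installworld && etcupdate && shutdown -r now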
>
> A summary of changes since 13.0-RC4 includes:
>
> o COMPAT_FREEBSD32 fill/set dbregs/fpregs has been implemented for
>   aarch64.
>
> o Miscellaneous DTrace updates.
>
> o An issue that could prevent some services, notably Nginx, from
>   restarting properly has been addressed.
>
> o Miscellaneous networking fixes.
>
> A list of changes since 12.2-RELEASE is available in the releng/13.0
> release notes:
>
> https://www.freebsd.org/releases/13.0R/relnotes.html
>
> Please note, the release notes page is not yet complete, and will be
> updated on an ongoing basis as the 13.0-RELEASE cycle progresses.
>
> === Virtual Machine Disk Images ===
>
> VM disk images are available for the amd64, i386, and aarch64
> architectures.  Disk images may be downloaded from the following URL
> (or any of the FreeBSD download mirrors):
>
> https://download.freebsd.org/ftp/releases/VM-IMAGES/13.0-RC5/
>
> The partition layout is:
>
> ~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
> ~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
> ~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)
>
> The disk images are available in QCOW2, VHD, VMDK, and raw disk image
> formats.  The image download size is approximately 135 MB and 165 MB
> (amd64 and i386, respectively), decompressing to a 21 GB sparse image.
>
> Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
> loader file is needed for qemu-system-aarch64 to be able to boot the
> virtual machine images.  See this page for more information:
>
> https://wiki.freebsd.org/arm64/QEMU
>
> To boot the VM image, run:
>
> % qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt  \
> -bios QEMU_EFI.fd -serial telnet::,server -nographic \
> -drive if=none,file=VMDISK,id=hd0 \
> -device virtio-blk-device,drive=hd0 \
> -device virtio-net-device,netdev=net0 \
> -netdev user,id=net0
>
> Be sure to replace "VMDISK" with the path to the virtual machine image.
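
For example, assuming the usual naming and directory layout under VM-IMAGES
(the exact file name below is an assumption, so check the download directory),
fetching and decompressing the aarch64 raw image would look like:

  # download and decompress the raw aarch64 VM image
  fetch https://download.freebsd.org/ftp/releases/VM-IMAGES/13.0-RC5/aarch64/Latest/FreeBSD-13.0-RC5-arm64-aarch64.raw.xz
  unxz FreeBSD-13.0-RC5-arm64-aarch64.raw.xz
  # then pass the decompressed file as VMDISK, i.e.
  #   -drive if=none,file=FreeBSD-13.0-RC5-arm64-aarch64.raw,id=hd0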
>
> BASIC-CI images can be found at:
>
> https://download.freebsd.org/ftp/releases/CI-IMAGES/13.0-RC5/
>
> === Amazon EC2 AMI Images ===
>
> FreeBSD/amd64 EC2 AMIs are available in the following regions:
>
>   af-south-1 region: ami-0fe76e3a8c6a8d108
>   eu-north-1 region: ami-0fe9d5e3fd7bd2972
>   ap-south-1 region: ami-0090069af2f905566
>   eu-west-3 region: ami-042ea753bff8d6a9d
>   eu-west-2 region: ami-08e0358d71a41ce97
>   eu-south-1 region: ami-0a1cb76bf83c3c49c
>   eu-west-1 region: ami-0559fa7d3edc6e607
>   ap-northeast-3 region: ami-04492324222abcb1b
>   ap-northeast-2 region: ami-0e851ff1f260888fd
>   me-south-1 region: ami-087ab54ec6e4d0cbb
>   ap-northeast-1 region: ami-0796973b853fba5e0
>   sa-east-1 region: ami-03f738fc556689a14
>   ca-central-1 region: ami-05f22cba7be241fbe
>   ap-east-1 region: ami-07ac68b5cc29039bc
>   ap-southeast-1 region: ami-04a8c807e53f07e72
>   ap-southeast-2 region: ami-097dc8195cad3a688
>   eu-central-1 region: ami-013f760d364d2d6a7
>   us-east-1 region: ami-0e5adeb6a86cb63c4
>   us-east-2 region: ami-04aac5053216613b1
>   us-west-1 region: ami-07a9e536124bc5cd3
>   us-west-2 region: ami-0d590e9beb5038bd0
>
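As a sketch of how one of these AMIs might be used (this assumes the AWS CLI
is configured, "mykey" is an existing EC2 key pair, and the instance type is
an arbitrary choice):

  # launch the us-east-1 amd64 AMI listed above
  aws ec2 run-instances --region us-east-1 \
      --image-id ami-0e5adeb6a86cb63c4 \
      --instance-type t3.micro \
      --key-name mykey
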
> FreeBSD/aarch64 EC2 AMIs are available in the following regions:
>
>   af-south-1 region: ami-085df8d192daddc93
>   eu-north-1 region: ami-035b8e0f104183bde
>   ap-south-1 region: ami-0224f547eb20ded51
>   eu-west-3 region: ami-092ad93b82e3a558a
>   eu-west-2 region: ami-0567fa1f5daa6c238
>   

Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Poul-Henning Kamp

Konstantin Belousov writes:

> > B) We lack a nuanced call-back to tell the subsystems to release some of 
> > their memory "without major delay".

> The delay in the wall clock sense does not drive the issue.

I didn't say anything about "wall clock" and you're missing my point by a wide
margin.

We need to make major memory consumers, like vnodes, take action *before*
shortages happen, so that *when* they happen, a lot of memory can be released
to relieve them.

> We cannot expect any io to proceed while we are low on memory [...]

Which is precisely why the top-level goal should be for that to never happen,
while still allowing the "freeable" memory to be used as a cache as much as
possible.

> > C) We have never attempted to enlist userland, where jemalloc often hangs on
> > to a lot of unused VM pages.
> > 
> The userland does not add to this problem, [...]

No, but userland can help solve it:  The unused pages from jemalloc/userland 
can very quickly be released to relieve any imminent shortage the kernel might 
have.

As can pages from vnodes, and for that matter socket buffers.

But there are always costs: actual costs, i.e. what it will take to release the
memory (locking, VM mappings, washing) and potential costs (lack of future 
caching opportunities).

These costs need to be presented to the central memory allocator, so when it 
decides back-pressure is appropriate, it can decide who to punk for how much 
memory.

> But normally the operating system does not have an issue with user pages.

Only if you disregard all non-UNIX operating systems.

Many other kernels have cooperated with userland to balance memory (and for 
that matter disk-space).

Just imagine how much better the desktop experience would be if we could send
SIGVM to firefox to tell it to stop being a memory-pig.

(At least two of the major operating systems in the desktop world do
something like that today.)
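
Even without a new signal, jemalloc (FreeBSD's libc malloc) already exposes a
knob in this direction; a hedged sketch, using jemalloc 5.x option names, that
asks it to return dirty pages to the kernel promptly instead of caching them,
at some CPU cost:

  # per-process: purge dirty/muzzy pages immediately instead of decaying them
  env MALLOC_CONF="dirty_decay_ms:0,muzzy_decay_ms:0" firefox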

> I/O latency is not the factor there. We must avoid situations where
> instantiating a vnode stalls waiting for KVA to appear; similarly, we
> must avoid a system state where vnode allocation has consumed so much kmem
> that other allocations stall.

My argument is the precise opposite:  We must make vnodes and the allocations 
they cause responsive to the system's overall memory availability, well in 
advance of the shortage happening in the first place.

> Quite indicative is that we do not shrink the vnode list on low memory
> events.  Vnlru also does not account for the memory pressure.

The only reason we do not is that we cannot tell definitively if freeing a 
vnode will cause disk I/O (which may not matter with SSDs) or even how much 
memory it might free, if anything.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Konstantin Belousov
On Sun, Apr 04, 2021 at 07:01:44PM +, Poul-Henning Kamp wrote:
> 
> Konstantin Belousov writes:
> 
> > But what would you provide as the input for a PID controller, and what would
> > be the targets?
> 
> Viewing this purely as a vnode-related issue is wrong; this is about memory
> allocation in general.
> 
> We may or may not want a PID regulator, but putting it on vnode counts
> would not improve things, precisely because, as you point out, the amount
> of memory a vnode ties up has enormous variance.
> 
Yes

> 
> We should focus on the end goal: To ensure "sufficient" memory can always be 
> allocated for any purpose "without major delay".
> 
and no

> 
> Architecturally there are three major problems:
> 
> A) While each subsystem generally has a good idea about memory that can be
> released "without major delay", the information does not trickle up through a
> summarizing NUMA-aware tree.
> 
> B) We lack a nuanced call-back to tell the subsystems to release some of 
> their memory "without major delay".
The delay in the wall-clock sense does not drive the issue.
We cannot expect any I/O to proceed while we are low on memory, in the sense
that allocators cannot respond right now.  More and more, our I/O subsystem
requires allocating memory to make any progress with I/O.  This is already
quite bad with geom, although some hacks keep it from being too conspicuous.

It is very bad with ZFS, where swap on zvols causes deadlocks almost
immediately.

> 
> C) We have never attempted to enlist userland, where jemalloc often hangs on
> to a lot of unused VM pages.
> 
Userland does not add to this problem, because the pagedaemon typically has
enough processing power to convert user-allocated pages into usable clean
or free pages.  Of course, if there is no swap and dirty anonymous pages
cannot be laundered, the issue would accumulate.

But normally the operating system does not have an issue with user pages.

> 
> As far as vnodes go:
> 
> 
> It used to be that "without major delay" meant "without disk-I/O" which again 
> led to the "dirty buffers/VM pages" heuristic.
> 
> With microsecond SSD backing store, that heuristic is not only invalid, it is 
> downright harmful in many cases.
> 
> GEOM maintains estimates of per-provider latency and VM+VFS should use that 
> to schedule write-back so that more of it happens outside rush-hour, in order 
> to increase the amount of memory which can be released "without major delay".
> 
> Today that happens largely as a side effect of the periodic syncer, which 
> does a really bad job at it, because it still expects VAX-era hardware 
> performance and workloads.
> 
I/O latency is not the factor there.  We must avoid situations where
instantiating a vnode stalls waiting for KVA to appear; similarly, we
must avoid a system state where vnode allocation has consumed so much kmem
that other allocations stall.

Quite indicative is that we do not shrink the vnode list on low memory
events.  Vnlru also does not account for the memory pressure.

The problem is that it is not clear how to express the relation between a
safe allocator state and our desire to cache file system data, which is
bound to the vnode identity.


Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Poul-Henning Kamp

Konstantin Belousov writes:

> But what would you provide as the input for a PID controller, and what would be
> the targets?

Viewing this purely as a vnode-related issue is wrong; this is about memory 
allocation in general.

We may or may not want a PID regulator, but putting it on vnode counts would 
not improve things, precisely because, as you point out, the amount of memory 
a vnode ties up has enormous variance.


We should focus on the end goal: To ensure "sufficient" memory can always be 
allocated for any purpose "without major delay".


Architecturally there are three major problems:

A) While each subsystem generally has a good idea about memory that can be 
released "without major delay", the information does not trickle up through a 
summarizing NUMA-aware tree.

B) We lack a nuanced call-back to tell the subsystems to release some of their 
memory "without major delay".

C) We have never attempted to enlist userland, where jemalloc often hangs on to 
a lot of unused VM pages.


As far as vnodes go:


It used to be that "without major delay" meant "without disk-I/O" which again 
led to the "dirty buffers/VM pages" heuristic.

With microsecond SSD backing store, that heuristic is not only invalid, it is 
downright harmful in many cases.

GEOM maintains estimates of per-provider latency and VM+VFS should use that to 
schedule write-back so that more of it happens outside rush-hour, in order to 
increase the amount of memory which can be released "without major delay".

Today that happens largely as a side effect of the periodic syncer, which does 
a really bad job at it, because it still expects VAX-era hardware performance 
and workloads.


-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Konstantin Belousov
On Sun, Apr 04, 2021 at 08:45:41AM -0600, Warner Losh wrote:
> On Sun, Apr 4, 2021, 5:51 AM Mateusz Guzik  wrote:
> 
> > On 4/3/21, Poul-Henning Kamp  wrote:
> > > 
> > > Mateusz Guzik writes:
> > >
> > >> It is high because of this:
> > >> msleep(&vnlruproc_sig, &vnode_list_mtx, PVFS, "vlruwk",
> > >> hz);
> > >>
> > >> i.e. it literally sleeps for 1 second.
> > >
> > > Before the line looked like that, it slept on "lbolt" aka "lightning
> > > bolt" which was woken once a second.
> > >
> > > The calculations which come up with those "constants" have always
> > > been utterly bogus math, not quite "square-root of shoe-size
> > > times sun-angle in Patagonia", but close.
> > >
> > > The original heuristic came from university environments with tons of
> > > students doing assignments and nethack behind VT102 terminals, on
> > > filesystems where files only seldom grew past 100KB, so it made sense
> > > to scale number of vnodes to how much RAM was in the system, because
> > > that also scaled the size of the buffer-cache.
> > >
> > > With a merged VM buffer-cache, whatever validity that heuristic had
> > > was lost, and we tweaked the bogomath in various ways until it
> > > seemed to mostly work, trusting the users for which it did not, to
> > > tweak things themselves.
> > >
> > > Please don't tweak the Finagle Constants again.
> > >
> > > Rip all that crap out and come up with something fundamentally better.
> > >
> >
> > Some level of pacing is probably useful to control total memory use --
> > there can be A LOT of memory tied up in the mere fact that a vnode is
> > fully cached. IMO the thing to do is to come up with some watermarks,
> > to be revisited every 1-2 years, and to change the behavior when they
> > get exceeded -- try to whack some stuff, but in the face of trouble
> > just go ahead and alloc without the 1-second sleep. Should the load
> > spike sort itself out, vnlru will slowly get things down to the
> > watermark. If the watermark is too low, maybe it can autotune. The
> > bottom line is that even with the current idea of limiting the
> > preferred total vnode count, the corner-case behavior can be
> > drastically better, suffering SOME perf loss from recycling vnodes
> > but not sleeping for a second for every single one.
> >
> 
> I'd suggest that going directly to a PID controller to control this would
> be better than the watermarks. That would give a smoother response than
> high/low watermarks would. While you'd still need some level to keep
> things at, the laundry stuff has shown that the precise value of that
> level is less critical than it is with watermarks.
But what would you provide as the input for a PID controller, and what
would be the targets?

The main reason for the (almost) hard cap on the number of vnodes is not
that an excessive number of vnodes is harmful by itself.  Each allocated
vnode typically implies the existence of several second-order allocations
that accumulate into significant KVA usage:
- filesystem inode
- vm object
- namecache entries
There are usually even more allocations at the third order; for instance,
a UFS inode carries a pointer to the dinode copy in RAM, and possibly an
EA area.  And of course, a vnode names pages in the page cache owned by
the corresponding file, i.e. the number of allocated vnodes regulates the
amount of work for the pagedaemon.
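
To see where that memory sits on a running system, a rough sketch (the exact
UMA zone names are an assumption and vary between filesystems and versions):

  # second-order allocations show up as UMA zones
  vmstat -z | egrep -i 'vnode|vm object|vfs cache|namei'
  # and the vnode counters themselves
  sysctl kern.maxvnodes vfs.numvnodes vfs.freevnodes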

We are currently trying to put a rational limit on the total number of
vnodes, estimating both the KVA and the physical memory consumed by them.
If you remove that limit, you need to ensure that we do not create an OOM
situation either for KVA or for physical memory just by creating too many
vnodes; otherwise the system cannot get out of it.

So there are some combinations of machine config (RAM) and load where the
default settings are arguably low.  Raising the limits needs to account
for the indirect resource usage from vnodes.

I do not know how to write the feedback formula, taking into account all
the consequences of a vnode's existence, and those effects also depend on
the underlying filesystem and the patterns of VM paging usage.  In this
sense ZFS is probably the simplest case, because its caching subsystem is
autonomous, while UFS and NFS are tightly integrated with the VM.

> 
> Warner
> 
> > I think the notion of 'struct vnode' being a separately allocated
> > object is not very useful and it comes with complexity (and happens to
> > suffer from several bugs).
> >
> > That said, the easiest and safest thing to do in the meantime is to
> > bump the limit. Perhaps the sleep can be whacked as it is which would
> > largely sort it out.
> >
> > --
> > Mateusz Guzik 

Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Warner Losh
On Sun, Apr 4, 2021, 5:51 AM Mateusz Guzik  wrote:

> On 4/3/21, Poul-Henning Kamp  wrote:
> > 
> > Mateusz Guzik writes:
> >
> >> It is high because of this:
> >> msleep(&vnlruproc_sig, &vnode_list_mtx, PVFS, "vlruwk",
> >> hz);
> >>
> >> i.e. it literally sleeps for 1 second.
> >
> > Before the line looked like that, it slept on "lbolt" aka "lightning
> > bolt" which was woken once a second.
> >
> > The calculations which come up with those "constants" have always
> > been utterly bogus math, not quite "square-root of shoe-size
> > times sun-angle in Patagonia", but close.
> >
> > The original heuristic came from university environments with tons of
> > students doing assignments and nethack behind VT102 terminals, on
> > filesystems where files only seldom grew past 100KB, so it made sense
> > to scale number of vnodes to how much RAM was in the system, because
> > that also scaled the size of the buffer-cache.
> >
> > With a merged VM buffer-cache, whatever validity that heuristic had
> > was lost, and we tweaked the bogomath in various ways until it
> > seemed to mostly work, trusting the users for which it did not, to
> > tweak things themselves.
> >
> > Please don't tweak the Finagle Constants again.
> >
> > Rip all that crap out and come up with something fundamentally better.
> >
>
> Some level of pacing is probably useful to control total memory use --
> there can be A LOT of memory tied up in the mere fact that a vnode is
> fully cached. IMO the thing to do is to come up with some watermarks,
> to be revisited every 1-2 years, and to change the behavior when they
> get exceeded -- try to whack some stuff, but in the face of trouble
> just go ahead and alloc without the 1-second sleep. Should the load
> spike sort itself out, vnlru will slowly get things down to the
> watermark. If the watermark is too low, maybe it can autotune. The
> bottom line is that even with the current idea of limiting the
> preferred total vnode count, the corner-case behavior can be
> drastically better, suffering SOME perf loss from recycling vnodes
> but not sleeping for a second for every single one.
>

I'd suggest that going directly to a PID controller to control this would
be better than the watermarks. That would give a smoother response than
high/low watermarks would. While you'd still need some level to keep
things at, the laundry stuff has shown that the precise value of that
level is less critical than it is with watermarks.

Warner

> I think the notion of 'struct vnode' being a separately allocated
> object is not very useful and it comes with complexity (and happens to
> suffer from several bugs).
>
> That said, the easiest and safest thing to do in the meantime is to
> bump the limit. Perhaps the sleep can be whacked as it is which would
> largely sort it out.
>
> --
> Mateusz Guzik 


Re: [SOLVED] Re: Strange behavior after running under high load

2021-04-04 Thread Mateusz Guzik
On 4/3/21, Poul-Henning Kamp  wrote:
> 
> Mateusz Guzik writes:
>
>> It is high because of this:
>> msleep(&vnlruproc_sig, &vnode_list_mtx, PVFS, "vlruwk",
>> hz);
>>
>> i.e. it literally sleeps for 1 second.
>
> Before the line looked like that, it slept on "lbolt" aka "lightning
> bolt" which was woken once a second.
>
> The calculations which come up with those "constants" have always
> been utterly bogus math, not quite "square-root of shoe-size
> times sun-angle in Patagonia", but close.
>
> The original heuristic came from university environments with tons of
> students doing assignments and nethack behind VT102 terminals, on
> filesystems where files only seldom grew past 100KB, so it made sense
> to scale number of vnodes to how much RAM was in the system, because
> that also scaled the size of the buffer-cache.
>
> With a merged VM buffer-cache, whatever validity that heuristic had
> was lost, and we tweaked the bogomath in various ways until it
> seemed to mostly work, trusting the users for which it did not, to
> tweak things themselves.
>
> Please don't tweak the Finagle Constants again.
>
> Rip all that crap out and come up with something fundamentally better.
>

Some level of pacing is probably useful to control total memory use --
there can be A LOT of memory tied up in the mere fact that a vnode is
fully cached. IMO the thing to do is to come up with some watermarks,
to be revisited every 1-2 years, and to change the behavior when they
get exceeded -- try to whack some stuff, but in the face of trouble
just go ahead and alloc without the 1-second sleep. Should the load
spike sort itself out, vnlru will slowly get things down to the
watermark. If the watermark is too low, maybe it can autotune. The
bottom line is that even with the current idea of limiting the
preferred total vnode count, the corner-case behavior can be
drastically better, suffering SOME perf loss from recycling vnodes
but not sleeping for a second for every single one.

I think the notion of 'struct vnode' being a separately allocated
object is not very useful and it comes with complexity (and happens to
suffer from several bugs).

That said, the easiest and safest thing to do in the meantime is to
bump the limit. Perhaps the sleep can be whacked as it is, which would
largely sort it out.
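
For reference, a minimal sketch of "bump the limit" on a running system (the
value is an arbitrary example; it should be sized to the machine's RAM, since
each vnode drags in the second-order allocations discussed earlier in the
thread):

  # raise the limit for the running system
  sysctl kern.maxvnodes=2000000
  # make it persist across reboots
  echo 'kern.maxvnodes=2000000' >> /etc/sysctl.conf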

-- 
Mateusz Guzik 


Re: Blacklisted certificates

2021-04-04 Thread Ronald Klop

On 3/31/21 4:19 PM, Jochen Neumeister wrote:


On 31.03.21 at 14:24, Ronald Klop wrote:


From: Jochen Neumeister 
Date: Wednesday, 31 March 2021 13:26
To: Christoph Moench-Tegeder , freebsd-current@freebsd.org
Subject: Re: Blacklisted certificates



On 31.03.21 at 13:02, Christoph Moench-Tegeder wrote:
> ## Jochen Neumeister (jon...@freebsd.org):
>
>> Why are these certificates blacklisted?
> Various reasons:
> - Symantec (which owned Thawte and VeriSign at the time) made
>    the news in a bad way:
>    https://www.theregister.com/2017/09/12/chrome_66_to_reject_symantec_certs/
> - some certificates are simply expired
> - some certificates use SHA-1 ("sha1WithRSAEncryption") which is
>    beyond deprecated
> - and basically "whatever Mozilla did", as the certificates are
>    imported from NSS.

How can I ignore these certificates now? So now everyone who updates has 
this problem.



Greetings
Jochen

Hi,

This is the proper output of installworld. So you don't have to ignore 
anything anymore. It is handled by installworld.




In the next step, etcupdate has another problem: I have to delete the 
blacklisted certificates manually.


# cd /usr/src && etcupdate
Conflicts remain from previous update, aborting.


Greetings
Jochen






I'd guess you need to run "etcupdate resolve". What is the output of 
"etcupdate status"?

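A minimal sketch of that recovery path, assuming the conflicts are the stale
certificate files mentioned above:

  # list the files still marked as conflicting
  etcupdate status
  # step through each conflict and pick a resolution
  etcupdate resolve
  # then re-run the update that previously aborted
  etcupdate
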

Regards,
Ronald.



Re: systat -swap to display large swap space users

2021-04-04 Thread tech-lists

Hi,

On Fri, Apr 02, 2021 at 08:12:14PM -0400, Yoshihiro Ota wrote:

Hi,

We do not seem to have a nice way to see current swap space usage per process.
I updated systat to use libprocstat to obtain such information and display it
along with the swap device/file stats.


Unfortunately your patch gets rejected on recent stable/13 and main/14.

--
J.




Re: FreeBSD 13.0-RC5 Now Available

2021-04-04 Thread Rodney W. Grimes
> On 4/3/21 3:34 PM, Glen Barber wrote:
> > The fifth RC build of the 13.0-RELEASE release cycle is now available.
> > 
> 
> Beautiful. If we see RC8 then that is fine. Testing is a wonderful
> process and I feel far better about a well tested release than an
> instant "oops" with 13.1 kicked out a week later.

BUT this is not more testing for the sake of good testing; this
is purely incidental testing because of regressions in the product.
This is, IMHO, the worst kind of testing.

> 
> Also, I really am waiting to see the ten year old bug 159356 laid
> to rest :
> 
> [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-specific
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=159356
> 
> Sort of a thorn in my side for years. Regardless, release candidates
> are a "good thing"(tm).

Not really; they indicate a lack of Quality Assurance and Control,
and without those principles you can test until you are blue in the
face and never actually get anywhere.

There is a premise in the product quality assurance sector:
"You cannot test in quality."

> 
> -- 
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken
> GreyBeard and suspenders optional

-- 
Rod Grimes rgri...@freebsd.org