11.4-RELEASE make delete-old

2020-06-26 Thread Greg Balfour
On a fresh install of 11.4-RELEASE, rebuilding the operating system
results in several files being deleted during the "make delete-old"
step.  This surprised me.  I wouldn't have expected this on a rebuild
of a new install without any updates applied.  See below, but for
example /usr/bin/llvm-ar is present after the initial install but is then
removed by the "make delete-old" step.  Is this to be expected?
Is the correct action to respond y when prompted about the files?

root@test:/usr/src # make -j 4 buildworld buildkernel
...
root@test:/usr/src # make installkernel
...
root@test:/usr/src # make installworld
...
root@test:/usr/src # make delete-old
>>> Removing old files (only deletes safe to delete libs)
remove /usr/bin/llvm-ar? y
remove /usr/lib/debug/usr/bin/llvm-ar.debug? y
remove /usr/bin/llvm-nm? y
remove /usr/lib/debug/usr/bin/llvm-nm.debug? y
remove /usr/bin/llvm-ranlib? y
remove /usr/share/man/man1/llvm-ar.1.gz? y
remove /usr/share/man/man1/llvm-nm.1.gz? y
remove /usr/share/man/man1/llvm-symbolizer.1.gz? y
>>> Old files removed
>>> Removing old directories
>>> Old directories removed
To remove old libraries run 'make delete-old-libs'.
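For anyone repeating this procedure, the prompts can be previewed and batched. A sketch, assuming a standard /usr/src tree (make check-old and the BATCH_DELETE_OLD_FILES knob are the mechanisms described in the Handbook's update-from-source procedure):

```shell
# Preview what would be removed, without touching anything:
make check-old

# Answer "y" to every prompt automatically:
yes | make delete-old

# Or skip the interactive prompts altogether:
make BATCH_DELETE_OLD_FILES=yes delete-old
make BATCH_DELETE_OLD_FILES=yes delete-old-libs
```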
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: swap space issues

2020-06-26 Thread Ronald Klop


From: Bob Bishop 
Date: Friday, 26 June 2020 17:18
To: Peter Jeremy 
CC: Donald Wilde , freebsd-stable 

Subject: Re: swap space issues




> On 26 Jun 2020, at 11:23, Peter Jeremy  wrote:
>
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>> Device          1K-blocks     Used    Avail Capacity
>> /dev/ada0s1b     33554432        0 33554432     0%
>> /dev/ada0s1d     33554432        0 33554432     0%
>> Total            67108864        0 67108864     0%
>
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.
 
If you configure a ZFS mirror in bsdinstall you get a swap partition per drive by default.


If you are running on multiple disks (a mirror) it can provide extra speed. The 
example above is on the same disk. On one disk, multiple swap partitions will 
only spread the data in a way that is suboptimal for the disk heads.



> Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
> through your boot messages, I expect you'll find messages like:
> warning: total configured swap (524288 pages) exceeds maximum recommended
> amount (498848 pages).
> warning: increase kern.maxswzone or reduce amount of swap.
> or maybe:
> WARNING: reducing swap size to maximum of MB per unit
>
> The absolute limit on swap space is vm.swap_maxpages pages but the realistic
> limit is about half that.  By default the realistic limit is about 4×RAM (on
> 64-bit architectures), but this can be adjusted via kern.maxswzone (which
> defines the #bytes of RAM to allocate to swzone structures - the actual
> space allocated is vm.swzone).
>
> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "out of swap space" messages
> generally mean that this number is too low, rather than there being a
> shortage of swap - particularly if your swap device is rather slow.
>
> --
> Peter Jeremy


--
Bob Bishop   t: +44 (0)118 940 1243
r...@gid.co.uk m: +44 (0)783 626 4518



Re: swap space issues

2020-06-26 Thread Bob Bishop


> On 26 Jun 2020, at 11:23, Peter Jeremy  wrote:
> 
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>> 
>> Device          1K-blocks     Used    Avail Capacity
>> /dev/ada0s1b     33554432        0 33554432     0%
>> /dev/ada0s1d     33554432        0 33554432     0%
>> Total            67108864        0 67108864     0%
> 
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

If you configure a ZFS mirror in bsdinstall you get a swap partition per drive 
by default.

> Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
> through your boot messages, I expect you'll find messages like:
> warning: total configured swap (524288 pages) exceeds maximum recommended 
> amount (498848 pages).
> warning: increase kern.maxswzone or reduce amount of swap.
> or maybe:
> WARNING: reducing swap size to maximum of MB per unit
> 
> The absolute limit on swap space is vm.swap_maxpages pages but the realistic
> limit is about half that.  By default the realistic limit is about 4×RAM (on
> 64-bit architectures), but this can be adjusted via kern.maxswzone (which
> defines the #bytes of RAM to allocate to swzone structures - the actual
> space allocated is vm.swzone).
> 
> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "out of swap space" messages
> generally mean that this number is too low, rather than there being a
> shortage of swap - particularly if your swap device is rather slow.
> 
> --
> Peter Jeremy


--
Bob Bishop   t: +44 (0)118 940 1243
r...@gid.co.uk m: +44 (0)783 626 4518









Re: swap space issues

2020-06-26 Thread Paul Mather
On Jun 26, 2020, at 6:58 AM, Stefan Eßer  wrote:

> Am 26.06.20 um 12:23 schrieb Peter Jeremy:
>> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>> 
>>> Device          1K-blocks     Used    Avail Capacity
>>> /dev/ada0s1b     33554432        0 33554432     0%
>>> /dev/ada0s1d     33554432        0 33554432     0%
>>> Total            67108864        0 67108864     0%
>> 
>> I strongly suggest you don't have more than one swap device on spinning
>> rust - the VM system will stripe I/O across the available devices and
>> that will give particularly poor results when it has to seek between the
>> partitions.
> 
[...]
>> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
>> the number of passes before the pageout daemon gives up and starts killing
>> processes when it can't free up enough RAM.  "out of swap space" messages
>> generally mean that this number is too low, rather than there being a
>> shortage of swap - particularly if your swap device is rather slow.
> 
> I'm not sure that this specific sysctl is documented in such a way
> that it is easy to find by people suffering from out-of-memory kills.
> 
> Perhaps it could be mentioned as a parameter that may need tuning in
> the OOM message?
> 
> And while it does not come up that often in the mail list, it might
> be better for many kinds of application if the default was increased
> (a longer wait for resources might be more acceptable than the loss
> of all results of a long running computation).


The OOM issue is more pressing on platforms like FreeBSD/arm that tend to have 
low RAM and slow writable storage such as SD cards.  There have been several 
threads on the issues this creates (e.g., 
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=228789+0+archive/2018/freebsd-arm/20180819.freebsd-arm)
that have led to some insight into how to tune the OOM killer.  One thing that 
becomes clear is that the "Out of swap space" error message is misleading: it 
often really means "Couldn't obtain RAM in a timely fashion."  On hardware 
such as the Raspberry Pi, it's often the case that the system has enough swap 
space: it just can't write to swap on the SD card before the default 
vm.pageout_oom_seq passes are exhausted, so the OOM killer starts reaping 
active processes (like the clang that is trying to build clang), and all sorts 
of things start to break. :-)

Cheers,

Paul.
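A sketch of the tuning being discussed, for anyone landing here after an OOM kill (the value 120 is illustrative, not a recommendation; the stock default has been 12 passes):

```shell
# Inspect the current value:
sysctl vm.pageout_oom_seq

# Give the pageout daemon more passes before it starts killing
# processes -- useful with slow swap such as an SD card:
sysctl vm.pageout_oom_seq=120

# Make the setting persistent across reboots:
echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf
```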


Re: swap space issues

2020-06-26 Thread Donald Wilde
On 6/26/20, Stefan Eßer  wrote:
> Am 26.06.20 um 12:23 schrieb Peter Jeremy:
>> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
[snip]
> An idea for a better strategy:
>
> It might be better to use an allocation algorithm that assigns a
> swap device to each running process that needs pages written to the
> swap device and only assign another swap device (and use if from
> then on for that process) if there is no free space left on the one
> used until then.
>
> Such a strategy would at least reduce the number of processes that
> need all configured swap devices at the same time in a striped
> configuration.
>
> If all processes start with the first configured swap device assigned
> to them, this will lead to only one of them being used until it fills
> up, then progressing to the next one.
>
> Whether the initial swap device assigned to a process should always be
> the first one configured in the system, or whether a process whose
> device could not be used should be moved on to the next one (typically
> the one then assigned to that process for further page-outs), is not
> obvious to me.

You're getting over my head, STefan, but that's okay. I suspect that
having somebody be loony -- and desperate enough -- to configure two
swap partitions is a rare occurrence.
>
> The behavior could be controlled by a sysctl to allow to adapt the
> strategy to the hardware (e.g. rotating vs. flash disks for swap).

Not to mention Intel and Micron and their fancy fast non-volatile
chips ('Optane'). I do agree that SOMEBODY is going to need this kind
of sysctl guidance for the kernel.
>
[snip]

> And while it does not come up that often in the mail list, it might
> be better for many kinds of application if the default was increased
> (a longer wait for resources might be more acceptable than the loss
> of all results of a long running computation).

Yes. Synth seems to be able to keep going / recover from last-known
success points, but your point is very valid. As we go further into
OOP, the _controllable_ use of heap space, stack space, and recursion
is going to become more crucial. We humans are used to operating
without a "full stack," (sleep DOES help :) ) but I think the whole
point of modern AI is to create systems that actually can master the
logical inference chains proposed by the early LISP guys at MIT, C-M,
and Stanford. The gains from ML have been enough to keep 'the Street'
happy for now but they'll want more soon enough.

> Regards, STefan

Thought-provoking indeed. :D

-- 
Don Wilde

* What is the Internet of Things but a system *
* of systems including humans? *



Re: swap space issues

2020-06-26 Thread Donald Wilde
On 6/25/20, Greg 'groggy' Lehey  wrote:
> On Thursday, 25 June 2020 at 19:31:34 -0700, Donald Wilde wrote:
>> On 6/25/20, Greg 'groggy' Lehey  wrote:
>>> On Wednesday, 24 June 2020 at 23:27:27 -0700, Kevin Oberman wrote,
>>> without trimming:
>>>
 On Wed, Jun 24, 2020 at 10:30 PM Greg 'groggy' Lehey 
 wrote:
>>>
 gpart(8) works just fine on MBR drives and partitions/slices and
 has a much friendlier user interface. "gpart resize" is the
 command you want.
>>>
>>> Thanks.  I try to offer suggestions that I've tried, and offer an
>>> example.  I haven't tried 'gpart resize', but it looks much easier.
>>
>> 'gpart resize' did work well,
>
> Yes, I saw that from the gpart output you posted.
>
>> although the man page for gpart assumes way too much. I was able to
>> successfully work my way through and create ('gpart add') and mount
>> not just one but two 32G swap partitions.
>
> Yes, I saw that too.  Not quite what I was suggesting: I suspected
> some overflow issue, so the partitions should really have been a
> little shy of 32 GB.  And at least for the start you should only mount
> one of them.  In the unlikely event that it should threaten to fill
> up, you can still mount the other one without rebooting (swapon(1)).

I got greedy! :) i also wanted to embed my newfound understanding of
gpart, geom, and swapping into the noggin so I could move on.
>
> How are things looking now?

So far, it works, but there haven't been enough changes in the ports
tree that synth even needs a cuppa.

I think what I am going to do is to wipe the machine one more time
from 12.1R (with 16G swap, as both you and Peter suggest), STABLE-ize
it, and see if synth can handle the entire installed ports tree
(around 300 primary+dependency ports) without crashing and with
conservative builder-subtask limits. It will take it three days to do
that, but I think I know enough of the gotchas to get it to succeed.

It's a good idea to have a second one in reserve, but when it broke
through the roof it happened way too fast for me to actually enable
anything. I'll see if I can make a one-character script invocation and
try it, though!
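For the "second one in reserve" approach, a one-liner is indeed all it takes; a sketch using swapon(8) (the device names follow the example earlier in the thread and will differ per machine):

```shell
# Activate the reserve swap partition without rebooting:
swapon /dev/ada0s1d

# Verify both devices and their usage:
swapinfo -h

# Deactivate it again once memory pressure has passed
# (in-use pages are paged back in first):
swapoff /dev/ada0s1d
```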

>
> Greg

Again, my thanks. You guys are the best! :D

-- 
Don Wilde

* What is the Internet of Things but a system *
* of systems including humans? *



Re: swap space issues

2020-06-26 Thread Stefan Eßer
Am 26.06.20 um 12:23 schrieb Peter Jeremy:
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>> Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>> Device          1K-blocks     Used    Avail Capacity
>> /dev/ada0s1b     33554432        0 33554432     0%
>> /dev/ada0s1d     33554432        0 33554432     0%
>> Total            67108864        0 67108864     0%
> 
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

This used to be beneficial, when disk read and write bandwidth was
limited and whole processes had to be swapped in or out due to RAM
pressure. (This changed due to more RAM and a different ratio of
seek to transfer times for a given amount of data.)

An idea for a better strategy:

It might be better to use an allocation algorithm that assigns a
swap device to each running process that needs pages written to the
swap device and only assign another swap device (and use it from
then on for that process) if there is no free space left on the one
used until then.

Such a strategy would at least reduce the number of processes that
need all configured swap devices at the same time in a striped
configuration.

If all processes start with the first configured swap device assigned
to them, this will lead to only one of them being used until it fills
up, then progressing to the next one.

Whether the initial swap device assigned to a process should always be
the first one configured in the system, or whether a process whose
device could not be used should be moved on to the next one (typically
the one then assigned to that process for further page-outs), is not
obvious to me.

The behavior could be controlled by a sysctl to allow to adapt the
strategy to the hardware (e.g. rotating vs. flash disks for swap).
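The fill-then-spill policy sketched above can be modelled in a few lines; this is a toy illustration of the allocation order only, not the kernel's actual swap pager:

```python
# Toy model of the proposal: every process starts on the first swap
# device and only spills to the next one when the current one is full.
def assign_pages(procs, devices):
    """procs: {name: pages needed}; devices: list of capacities in pages.
    Returns per-device usage under a fill-then-spill policy."""
    used = [0] * len(devices)
    for pages in procs.values():
        dev = 0
        while pages > 0:
            free = devices[dev] - used[dev]
            take = min(free, pages)
            used[dev] += take
            pages -= take
            if pages > 0:
                dev += 1  # spill to the next device only when full
    return used

# Two 32 GiB swap devices (33554432 1K-blocks = 8388608 4 KiB pages each),
# three hungry processes:
cap = 33554432 // 4
usage = assign_pages({"a": 3000000, "b": 4000000, "c": 3000000}, [cap, cap])
print(usage)  # the first device fills up before the second sees any I/O
```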

> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "out of swap space" messages
> generally mean that this number is too low, rather than there being a
> shortage of swap - particularly if your swap device is rather slow.

I'm not sure that this specific sysctl is documented in such a way
that it is easy to find by people suffering from out-of-memory kills.

Perhaps it could be mentioned as a parameter that may need tuning in
the OOM message?

And while it does not come up that often in the mail list, it might
be better for many kinds of application if the default was increased
(a longer wait for resources might be more acceptable than the loss
of all results of a long running computation).

Regards, STefan


Re: swap space issues

2020-06-26 Thread Donald Wilde
On 6/26/20, Peter Jeremy  wrote:
> On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>>Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>>
>>Device          1K-blocks     Used    Avail Capacity
>>/dev/ada0s1b     33554432        0 33554432     0%
>>/dev/ada0s1d     33554432        0 33554432     0%
>>Total            67108864        0 67108864     0%
>
> I strongly suggest you don't have more than one swap device on spinning
> rust - the VM system will stripe I/O across the available devices and
> that will give particularly poor results when it has to seek between the
> partitions.

My intent is to make this machine function -- getting the bear
dancing. How deftly she dances is less important than that she dances
at all. My for-real boxen will have real HP and real cores and RAM.

>
> Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
> through your boot messages, I expect you'll find messages like:
> warning: total configured swap (524288 pages) exceeds maximum recommended
> amount (498848 pages).
> warning: increase kern.maxswzone or reduce amount of swap.

Yes, as I posted, those were part of the failure stream from the synth
program. When I had kern.maxswzone increased, it got through boot
without complaining.

> or maybe:
> WARNING: reducing swap size to maximum of MB per unit

The warnings were there, in the as-it-failed complaints.

> The absolute limit on swap space is vm.swap_maxpages pages but the
> realistic limit is about half that.  By default the realistic limit is
> about 4×RAM (on 64-bit architectures), but this can be adjusted via
> kern.maxswzone (which defines the #bytes of RAM to allocate to swzone
> structures - the actual space allocated is vm.swzone).
>
> As a further piece of arcana, vm.pageout_oom_seq is a count that controls
> the number of passes before the pageout daemon gives up and starts killing
> processes when it can't free up enough RAM.  "out of swap space" messages
> generally mean that this number is too low, rather than there being a
> shortage of swap - particularly if your swap device is rather slow.
>
Thanks, Peter!
-- 
Don Wilde

* What is the Internet of Things but a system *
* of systems including humans? *



Re: swap space issues

2020-06-26 Thread Peter Jeremy
On 2020-Jun-25 11:30:31 -0700, Donald Wilde  wrote:
>Here's 'pstat -s' on the i3 (which registers as cpu HAMMER):
>
>Device          1K-blocks     Used    Avail Capacity
>/dev/ada0s1b     33554432        0 33554432     0%
>/dev/ada0s1d     33554432        0 33554432     0%
>Total            67108864        0 67108864     0%

I strongly suggest you don't have more than one swap device on spinning
rust - the VM system will stripe I/O across the available devices and
that will give particularly poor results when it has to seek between the
partitions.

Also, you can't actually use 64GB swap with 4GB RAM.  If you look back
through your boot messages, I expect you'll find messages like:
warning: total configured swap (524288 pages) exceeds maximum recommended 
amount (498848 pages).
warning: increase kern.maxswzone or reduce amount of swap.
or maybe:
WARNING: reducing swap size to maximum of MB per unit

The absolute limit on swap space is vm.swap_maxpages pages but the realistic
limit is about half that.  By default the realistic limit is about 4×RAM (on
64-bit architectures), but this can be adjusted via kern.maxswzone (which
defines the #bytes of RAM to allocate to swzone structures - the actual
space allocated is vm.swzone).
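To put those example figures into everyday units (a back-of-the-envelope sketch assuming the usual 4 KiB page size on amd64, not output from any FreeBSD tool):

```python
PAGE = 4096  # bytes per page, the usual size on amd64

def pages_to_gib(pages):
    """Convert a page count (as printed in the kernel warnings) to GiB."""
    return pages * PAGE / 2**30

# The figures from the sample warning above:
print(pages_to_gib(524288))   # configured swap: 2.0 GiB
print(pages_to_gib(498848))   # recommended maximum: ~1.9 GiB

# The ~4x-RAM rule of thumb applied to a 4 GB machine:
print(pages_to_gib(4 * 4 * 2**30 // PAGE))  # 16.0 GiB
```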

As a further piece of arcana, vm.pageout_oom_seq is a count that controls
the number of passes before the pageout daemon gives up and starts killing
processes when it can't free up enough RAM.  "out of swap space" messages
generally mean that this number is too low, rather than there being a
shortage of swap - particularly if your swap device is rather slow.

-- 
Peter Jeremy

