On Oct 18, 2025, at 11:23, Mark Millard <[email protected]> wrote:

> On Oct 18, 2025, at 10:43, Mark Millard <[email protected]> wrote:
> 
>> void <void_at_f-m.fm> wrote on
>> Date: Sat, 18 Oct 2025 12:43:07 UTC :
>> 
>>> On Fri, Oct 17, 2025 at 09:21:06PM -0700, Mark Millard wrote:
>>>> 
>>>> At this point stable/15 and a non-debug main 16 are not all that
>>>> different. So I attempted builds of what I had for a ports tree
>>>> (from Oct 13) and then updating the ports tree and rebuilding
>>>> what changed ( PKG_NO_VERSION_FOR_DEPS=yes style ) based on my
>>>> normal environment and poudriere-devel use.
>>>> 
>>>> Neither failed.
>>>> 
>>>> But you give little configuration information so I do not
>>>> know how well my attempt approximated your context:
>>> 
>>> Point taken but at that stage I only wanted to know if others 
>>> could build it, because I couldn't on multiple poudrieres and
>>> it had/has not yet (2025.10.18-1224 UTC) been built on the pkg cluster.
>>> Now that I know it can be built, I partially know where to look, and
>>> I avoid making a PR for the port.
>> 
>> Do you use anything like:
>> 
>> # Delay when persistent low free RAM leads to
>> # Out Of Memory killing of processes:
>> vm.pageout_oom_seq=120
>> 
>> Or:
>> 
>> #
>> # For plenty of swap/paging space (will not
>> # run out), avoid pageout delays leading to
>> # Out Of Memory killing of processes:
>> #vm.pfault_oom_attempts=-1
>> #
>> # For possibly insufficient swap/paging space
>> # (might run out), increase the pageout delay
>> # that leads to Out Of Memory killing of
>> # processes (showing defaults at the time):
>> #vm.pfault_oom_attempts= 3
>> #vm.pfault_oom_wait= 10
>> 
>> (Mine are in /boot/loader.conf .)
>> 
>>>> RAM+SWAP == ??? + ??? == ???
>>> 128+4 == 132GB
>> 
>> Note that with USE_TMPFS=all but TMPFS_BLACKLIST extensively
>> used to avoid tmpfs use for port-packages with huge file
>> system requirements, I reported for the initial build:
>> 
>> QUOTE
>> So: Somewhere between 132624 MiBytes and 143875 MiBytes or
>>   so was sufficient RAM+SWAP, all RAM here.
>> END QUOTE
>> 
>> But that was for 32 FreeBSD cpus, not 20. Still, the file
>> system usage contribution to RAM+SWAP usage when tmpfs
>> is in full use tends not to be all that dependent on the
>> FreeBSD cpu count.
>> 
>> Converting my figures to GiBytes:
>> 
>> 132624 MiBytes is a little under 129.6 GiBytes
>> 143875 MiBytes is a little under 140.6 GiBytes
>> 
>> The range is that wide based, in part, on
>> lack of significant memory pressure, given the
>> 192 GiBytes of RAM. When SWAP is significantly
>> involved, the figures give much better information
>> about RAM+SWAP requirements because of the
>> memory-pressure consequences. So I'd not infer
>> that much from the above.
>> 
>> I can boot the system using hw.physmem="128G"
>> in /boot/loader.conf. I'll probably get a SWAP
>> binding warning about 512 GiBytes of SWAP
>> being a potential mistuning for that amount of
>> RAM. (More like 474 GiBytes of SWAP would likely
>> not complain for 128 GiBytes of RAM.)
>> 
>> I can disable my TMPFS_BLACKLIST list.
>> 
>> I can constrain to use of PARALLEL_JOBS=20 and
>> have MAKE_JOBS_NUMBER_LIMIT=20 for
>> ALLOW_MAKE_JOBS use. But attempting to have it
>> actually avoid 12 of the 32 FreeBSD cpus would
>> probably be messier and I've no experience with
>> any known-effective way of doing that for bulk
>> runs. So I may well not deal with that issue and
>> just let it use up to the 32. This makes
>> judging load average implications dependent
>> on the 32.
>> 
>> Also, this build would not have prior builds
>> of some of the port-packages. (Nothing would
>> end up with "inspected" status.)
>> 
>> So I may later have better information for
>> comparison, including for RAM+SWAP use.
>> 
>>> The problem happened on two systems. For simplicity I'm talking about the
>>> beefier system. It has 20 cpus (40 with HT on, but it is turned off)
>>> and 128 GB RAM. Configured swap is 4 GB and hardly used.
>>> 
>>>> poudriere.conf :
>>>> USE_TMPFS=???
>>> all
>>> 
>>>> TMPFS_BLACKLIST=???
>>> not defined
>>> 
>>>> PARALLEL_JOBS=??? # (or command line control of such)
>>> 1 in poudriere.conf at the moment. It usually has 3 as per pkg.f.o 
>>> but -J20 was also tried directly on the command line
>>> 
>>>> ALLOW_MAKE_JOBS=??? # (defined vs. not)
>>> yes
>>> 
>>>> ALLOW_MAKE_JOBS_PACKAGES=???
>>> undefined
>>> 
>>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES=???
>>> "llvm* rust* gcc*"
>>> 
>>>> PRIORITY_BOOST=???
>>> undefined
>>> 
>>>> other relevant possibilities?
>>>> 
>>>> make.conf (or command line control of such):
>>>> MAKE_JOBS_NUMBER_LIMIT=??? # or MAKE_JOBS_NUMBER=???
>>> not defined within jailname-make.conf
>>> 
>>>> Details for my context . . .
>>> 
>>> Thank you for your time in this. I'm interested - do you
>>> make available your hacked version of top? Could be useful!
>> 
>> I'll deal with top separately. I've not been doing
>> source based activities for months and likely
>> should get my context for such up to date first.
> 
> I forgot to ask about the non-tmpfs file system(s):
> ZFS? UFS? Any tuning of note?
> 
> My prior tests I reported on were done in a ZFS
> context, although just on a single partition: it
> is ZFS just to have bectl use as far as why goes,
> not for redundancy or other typical ZFS usage. The
> only tuning is:
> 
> /etc/sysctl.conf:vfs.zfs.vdev.min_auto_ashift=12
> /etc/sysctl.conf:vfs.zfs.per_txg_dirty_frees_percent=5
> 
> The use of "5" instead of "30" was as recommended
> by the person who changed the default to 30. It was
> for some behavior that I reported for a specific
> context, but the 5 seemed to not be a problem for me
> for any context I had so I've used it systematically
> since then. 5 was the prior default, as I remember.

Another thing I did not ask about was other competing
use of the system while the bulk build was in progress.
My context is sshd (no GUI) with basically nothing else
going on. This was a "bulk -c", so everything was
built.


As for the build: It worked fine. The use of
PARALLEL_JOBS=20 and MAKE_JOBS_NUMBER_LIMIT=20 did
limit the memory space usage compared to my first
test.
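
For reference, a sketch of the constrained-test settings described
above, in the same file:setting style as my earlier sysctl lines.
The file placements for the poudriere and make.conf lines reflect my
usual layout, not something confirmed for other setups:

```
/boot/loader.conf:hw.physmem="128G"
/usr/local/etc/poudriere.conf:USE_TMPFS=all
/usr/local/etc/poudriere.conf:PARALLEL_JOBS=20
/usr/local/etc/poudriere.conf:ALLOW_MAKE_JOBS=yes
jailname-make.conf:MAKE_JOBS_NUMBER_LIMIT=20
```

(TMPFS_BLACKLIST was disabled for this test, as noted earlier.)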

. . .
[05:09:17] [01] [00:02:57] Finished   net-im/signal-desktop | 
signal-desktop-7.74.0: Success
[05:09:18] Stopping 20 builders
[05:09:18] Creating pkg repository
Creating repository in /tmp/packages: 100%
Packing files for repository: 100%
. . .
[05:09:19] Committing packages to repository: 
/usr/local/poudriere/data/packages/main-min-devel-lib32-amd64-default/.real_1760834735
 via .latest symlink
[05:09:19] [main-min-devel-lib32-amd64-default] [2025-10-18_12h36m16s] 
[committing] Time: 05:09:17
           Queued: 369 Inspected: 0 Ignored: 0 Built: 369 Failed: 0 Skipped: 0 
Fetched: 0 Remaining: 0
. . .

Maximum Observed figures:
Note: 0 < SwapUsed was never observed by my hacked variation of top.
59254Mi MaxObs(Act+Lndry+SwapUsed), 67893Mi MaxObs(A+Wir+L+SU), 99751Mi (A+W+L+SU+InAct)

So it fit inside the 128 GiByte RAM space just fine:

Somewhere between 67893 MiBytes and 99751 MiBytes needed.
(No significant memory pressure. But when the load was
over 20, up to 32 FreeBSD cpus might have been active,
not 20 or less.)
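
The headroom arithmetic above can be checked with sh arithmetic
(figures taken from the MaxObs line; 128 GiBytes expressed in
MiBytes):

```shell
# 128 GiBytes of RAM, in MiBytes:
echo $((128 * 1024))
# prints 131072

# Does the larger MaxObs figure, (A+W+L+SU+InAct), still fit? (1 == yes)
echo $((99751 <= 131072))
# prints 1
```

So even the bound that counts Inact stays well under the RAM size,
consistent with swap never being observed in use.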


An example from electron37 towards the end of its run:

[05:02:21] [main-min-devel-lib32-amd64-default] [2025-10-18_12h36m16s] 
[parallel_build] Time: 05:02:19
           Queued: 369 Inspected: 0 Ignored: 0 Built: 367 Failed: 0 Skipped: 0 
Fetched: 0 Remaining: 2
 ID  TOTAL              ORIGIN   PKGNAME           PHASE TIME     TMPFS       
CPU% MEM%
[01] 01:31:09 devel/electron37 | electron37-37.6.1 build 01:28:46 35.58 GiB 
431.6% 7.4%

Note the TMPFS.


For reference (after it had completed, top having been started before
the bulk run):

last pid: 58821;  load averages:    0.04,    1.01,    6.85 MaxObs:   68.84,   
57.93,   49.32                                                                  
                  up 0+06:08:18  17:56:20
1204 threads:  33 running, 1057 sleeping, 114 waiting, 119 MaxObsRunning
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 9252Ki Active, 149712Ki Inact, 233472B Laundry, 17115Mi Wired, 57344B Buf, 
108400Mi Free, 59254Mi MaxObsActive, 17128Mi MaxObsWired, 67893Mi 
MaxObs(Act+Wir+Lndry)
ARC: 12422Mi Total, 2785Mi MFU, 8957Mi MRU, 270336B Anon, 41096Ki Header, 
651201Ki Other
     10903Mi Compressed, 12437Mi Uncompressed, 1.14:1 Ratio
Swap: 524288Mi Total, 524288Mi Free, 59254Mi MaxObs(Act+Lndry+SwapUsed), 
67893Mi MaxObs(A+Wir+L+SU), 99751Mi (A+W+L+SU+InAct)


Note: The "MaxObs" load averages are each at independent times.

As you can see, my hack uses long lines. I normally have 200 character
wide windows for my ssh sessions.


===
Mark Millard
marklmi at yahoo.com

