While investigating why the builders used notable tmpfs space
even when only one builder was still active, I discovered
that, for example, each builder ends up with its own copy of
/usr/local/poudriere/data/.m/*/*/usr/ (and more) that is not
cleared out while the builder is inactive. This looks to be a
systematic contribution to tmpfs usage during times when
various builders are inactive.
# df -m /usr/local/poudriere/data/.m/main-aarch64-default/*/*/ | sed -e 's@/[0-9][0-9]/@/*/@' | sort -k1,1 -k6,6 -k3,3 -k1,6 -u
/usr/local/poudriere/data/.m/main-aarch64-default/ref/rescue 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/*/rescue
/usr/local/poudriere/data/packages/main-aarch64-default/.building 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/*/packages
/usr/local/poudriere/data/packages/main-aarch64-default/.building 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/ref/packages
/usr/local/poudriere/jails/main-aarch64/rescue 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/ref/rescue
/usr/ports/distfiles 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/*/distfiles
/usr/ports/distfiles 1114846 498917 526741 49% /usr/local/poudriere/data/.m/main-aarch64-default/ref/distfiles
Filesystem 1M-blocks Used Avail Capacity Mounted on
devfs 0 0 0 0% /usr/local/poudriere/data/.m/main-aarch64-default/*/dev
devfs 0 0 0 0% /usr/local/poudriere/data/.m/main-aarch64-default/ref/dev
procfs 0 0 0 0% /usr/local/poudriere/data/.m/main-aarch64-default/*/proc
procfs 0 0 0 0% /usr/local/poudriere/data/.m/main-aarch64-default/ref/proc
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/01
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/02
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/03
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/04
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/05
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/06
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/07
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/08
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/09
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/10
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/11
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/12
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/13
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/14
tmpfs 66539 1140 65398 2% /usr/local/poudriere/data/.m/main-aarch64-default/ref
For the above, only ref/ and one other builder were active at
the time. Imagine having 32 builders, or 128, or even more,
with 1140 MiBytes held by each inactive one. For the above,
all the builders had actually only reported:

Inspecting . . .: determining shlib requirements

for each package build tried. No actual builds were done.
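The scaling concern can be sketched with simple arithmetic. The ~1140 MiByte per-builder figure is the tmpfs "Used" value from the df output above; the builder counts are hypothetical:

```shell
# Rough idle-tmpfs footprint for N builders at ~1140 MiB each
# (1140 MiB is the per-builder figure from the df output above;
# the builder counts are hypothetical examples).
for n in 14 32 128; do
  printf '%3d builders: ~%d MiB idle tmpfs\n' "$n" "$((n * 1140))"
done
```

At 128 builders that is roughly 142 GiBytes of RAM+SWAPSPACE held by inactive builders.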
[I'll note that the world directory trees for the booted
system and for the poudriere jail are from official
PkgBase builds that were installed. Also, the system
is UFS based, not ZFS based.]
For reference (note the use of -x):
# du -xsAm /usr/local/poudriere/data/.m/main-*-default/*/[uv][sa]r/ | sed -e 's@/[0-9][0-9]/@/*/@' | sort -k1,2 -u
376 /usr/local/poudriere/data/.m/main-aarch64-default/*/var/
376 /usr/local/poudriere/data/.m/main-aarch64-default/ref/var/
713 /usr/local/poudriere/data/.m/main-aarch64-default/*/usr/
713 /usr/local/poudriere/data/.m/main-aarch64-default/ref/usr/
So, for this example, roughly 1089 MiBytes of the 1140 MiBytes
in each such tmpfs comes from the combination of var/ and
usr/ alone.
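That figure is just the sum of the two du numbers above:

```shell
# 376 MiB (var/) + 713 MiB (usr/), figures from the du output above
echo "$((376 + 713)) MiB"   # prints: 1089 MiB
```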
# poudriere jail -l
JAILNAME VERSION OSVERSION ARCH METHOD TIMESTAMP PATH
. . .
main-aarch64 15.0-CURRENT aarch64 pkgbase 2025-02-12 22:49:27 /usr/local/poudriere/jails/main-aarch64
. . .
There is more to look into here: historically, larger builders
leave behind larger tmpfs usage until the next builder reuse
(if any). Having a few builds such as lang/rust , devel/llvm20 ,
etc. finish, but with those builders not starting anything new
for a notable time, can lead to huge RAM+SWAPSPACE usage by
those inactive builders during that time when USE_TMPFS=all is
in use without TMPFS_BLACKLIST= .
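For completeness, a mitigation along those lines might look like the following poudriere.conf fragment. This is a sketch, not something from my configuration; the specific blacklist globs and the tmpdir path are illustrative assumptions:

```shell
# Illustrative /usr/local/etc/poudriere.conf fragment (a sketch).
# Keep tmpfs for everything . . .
USE_TMPFS=all
# . . . but have the huge builds put their work areas on disk
# instead of tmpfs. The globs below are example entries only.
TMPFS_BLACKLIST="rust llvm*"
# Directory used for blacklisted builds' work areas (example path).
TMPFS_BLACKLIST_TMPDIR=/usr/local/poudriere/data/tmpfs_blacklist
```

With such settings the rust/llvm style builds would not contribute to the leftover tmpfs usage in the first place.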
===
Mark Millard
marklmi at yahoo.com