On Monday 2022-05-02 14:20, Thomas Jahns wrote:
>>>> Is there a way to speed 'automake' up?
>>> [...let] ephemeral builds [..] use /dev/shm [...]
>> There ought to be little difference [...] automake, that's nowhere near as
>> IO-heavy as untarring kernel source archives. It's much more a CPU-bound
>> task.
>I found tmpfs to be faster when there were multiple smallish (less than an fs
>block) writes to the same file, particularly by different programs. This may
>be more important in the context of all autotools taken together than automake
>alone. Also not all file systems take full advantage of all methods to prevent
>the system from hitting disk like XFS does, i.e. results depend on what you
>compare to.

But you're just acknowledging that the slowness comes from the _fs_, aren't you?

Indeed, if a source code package consists of 10000 files, then configure
produces another 10k files for the stuff in the ".deps" directories.
There is not much the autotools can do here, as I believe pregenerating
those 10k files, all with "# dummy" content, is needed to support the least
common denominator of /usr/bin/make.
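A rough sketch of why those stubs have to exist at all (the file names and the
dependency rule here are illustrative, not lifted from an actual automake
output): portable make only has a hard "include" directive, so the included
dependency files must already be present before the first run.

```make
# Portable make has only a hard `include`: if .deps/foo.Po is missing,
# make aborts before any rule could regenerate it.  GNU make's `-include`
# tolerates missing files, but automake targets the least common
# denominator, so configure pre-creates each .Po file with "# dummy".
DEPDIR = .deps
include $(DEPDIR)/foo.Po

# Dependencies are written to a .Tpo temp file during compilation and
# renamed into place, so an interrupted compile never leaves a
# half-written .Po behind.
foo.o: foo.c
	$(CC) -MT $@ -MD -MP -MF $(DEPDIR)/foo.Tpo -c -o $@ foo.c
	mv -f $(DEPDIR)/foo.Tpo $(DEPDIR)/foo.Po
```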

I wonder, rather than emitting those 8 bytes to .Po/.Plo/.Tpo/etc. files, could
we emit 0 bytes instead? Then filesystems would have to write only the inode
and forego extent allocation for the data portion (and save some disk space
too, as each 8-byte file in practice reserves something like 4K on
non-packing/non-compressing filesystems).
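The effect is easy to observe (this is just a sketch using GNU stat on Linux;
the file names are made up, and the block counts you see depend on the
filesystem you run it on):

```shell
# Compare an 8-byte dep stub, as automake emits today, with an empty one.
mkdir -p .deps
printf '# dummy\n' > .deps/foo.Po   # 8 bytes of content
: > .deps/foo_empty.Po              # 0 bytes: inode only, no data extent

# %s = apparent size in bytes, %b = 512-byte blocks actually allocated
stat -c '%n: %s bytes, %b blocks' .deps/foo.Po .deps/foo_empty.Po
```

On a typical non-packing filesystem the 8-byte file still gets a full data
block allocated, while the empty file allocates none.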
