Re: How to speed up 'automake'

2022-05-02 Thread Karl Berry
-   @echo '# dummy' >$@-t && $(am__mv) $@-t $@
+   @: >>$@

1) does it actually speed anything up?
2) without the mv I fear we are no longer noticing write failure. -k
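For what it's worth, a quick sketch of question 2 (paths are hypothetical): opening the file for append already fails with a non-zero exit status, so make would still notice an unwritable or missing target directory. What goes away with the mv is only the check that the "# dummy" payload was fully written, and the new recipe writes no payload at all.

```shell
# Does ": >>$@" still report write failure?  Sketch with made-up paths.
dir=$(mktemp -d)

# Normal case: the redirection succeeds and creates an empty file.
: >> "$dir/ok.Po" && ok_status=0 || ok_status=1

# Failure case: the target directory does not exist (ENOENT); the
# subshell exits non-zero, so a make recipe would abort here.
( : >> "$dir/no-such-subdir/bad.Po" ) 2>/dev/null \
    && bad_status=0 || bad_status=1

size=$(wc -c < "$dir/ok.Po")
rm -rf "$dir"
```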




Re: How to speed up 'automake'

2022-05-02 Thread Jan Engelhardt


On Monday 2022-05-02 15:31, Jan Engelhardt wrote:
>> If empty files are ok (assuming they are needed at all), can they be produced
>> with a minimum number of executions of a 'touch' command?
>
>Better yet, they can be produced with the ">>" shell builtin, skipping
>spawning the touch program. (We need to use >>, because > truncates,
>and configure does not truncate deps files for a good reason - doing
>so would be like `make clean`)

touch wasn't even used, but mv was, and it should become redundant too.

(The use of '>' followed by 'mv' likely has to do with a desire for
atomicity when you run make -j.)

diff --git a/lib/am/depend.am b/lib/am/depend.am
index ecc6180a6..3a493c3de 100644
--- a/lib/am/depend.am
+++ b/lib/am/depend.am
@@ -18,7 +18,7 @@ am__mv = mv -f
 
 $(am__depfiles_remade):
@$(MKDIR_P) $(@D)
-   @echo '# dummy' >$@-t && $(am__mv) $@-t $@
+   @: >>$@
 
 am--depfiles: $(am__depfiles_remade)
 .PHONY: am--depfiles



Re: How to speed up 'automake'

2022-05-02 Thread Thomas Jahns

On May 2, 2022, at 15:07, Jan Engelhardt wrote:
> 
> 
> On Monday 2022-05-02 14:20, Thomas Jahns wrote:
>>>>> Is there a way to speed 'automake' up?
>>>> 
>>>> [...let] ephemeral builds [..] use /dev/shm [...]
>>> 
>>> There ought to be little difference [...] automake, that's nowhere near as
>>> IO-heavy as untarring kernel source archives. It's much more a CPU-bound
>>> task.
>> 
>> I found tmpfs to be faster when there were multiple smallish (less than an fs
>> block) writes to the same file, particularly by different programs. This may
>> be more important in the context of all autotools taken together than 
>> automake
>> alone. Also not all file systems take full advantage of all methods to 
>> prevent
>> the system from hitting disk like XFS does, i.e. results depend on what you
>> compare to.
> 
> But you're just acknowledging that the slowness comes from the _fs_, are you 
> not?

Yes, sure, I was explicitly stating in my initial reply that using tmpfs might 
not be what the OP asked for but instead what might actually solve their 
problem of slow builds. I also like having programs make efficient use of 
system resources, but sometimes throwing machine resources at a problem can be 
the most appropriate course of action.

> Indeed, if a source code package consists of 10k files, then configure
> produces another 10k files for the stuff in the ".deps" directories.
> There is not much autotooling can do here, as I believe pregenerating
> those 10k files all with "# dummy" content is to support the least common
> denominator of /usr/bin/make.
> 
> I wonder, rather than emitting those 8 bytes to .Po/.Plo/.Tpo/etc. files, 
> could
> we emit 0 bytes instead? Then filesystems would have to write only the inode
> and forego extent allocation for the data portion (and save some disk space
> too, as each 8-byte file in practice reserves something like 4K on
> non-packing/non-compressing filesystems).

That might be something to investigate: could autoconf detect GNU parallel and 
call config.status in some way that automatically creates multiple instances? 
Running config.status is usually not the longest part of an autoconf run, but 
it might be one of the simpler parts to parallelize.

Thomas






Re: How to speed up 'automake'

2022-05-02 Thread Jan Engelhardt


On Monday 2022-05-02 15:18, Bob Friesenhahn wrote:
> On Mon, 2 May 2022, Jan Engelhardt wrote:
>>
>> Indeed, if a source code package consists of 10k files, then configure
>> produces another 10k files for the stuff in the ".deps" directories.
>> There is not much autotooling can do here, as I believe pregenerating
>> those 10k files all with "# dummy" content is to support the least common
>> denominator of /usr/bin/make.
>
> In a configure script I see the option:
>
>  --disable-dependency-tracking
>  speeds up one-time build
>
> so what tangible benefits does --disable-dependency-tracking actually provide?

I remember that option. It disables the .deps machinery entirely, or
thereabouts. Distributions, including openSUSE, use it because they
(barring exceptional circumstances) do not modify (non-BUILT_SOURCES)
source files between two invocations of make.

Great for the ephemeral build root that was initially talked about,
not so cool for developers who switch between editor and compiler
every so often :)


> If empty files are ok (assuming they are needed at all), can they be produced
> with a minimum number of executions of a 'touch' command?

Better yet, they can be produced with the ">>" shell builtin, skipping
spawning the touch program. (We need to use >>, because > truncates,
and configure does not truncate deps files for a good reason - doing
so would be like `make clean`)
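A rough sketch of the difference (file names are made up): touch can create any number of files in one execution, while the ">>" redirection needs no child process at all and, unlike ">", leaves an existing file's contents alone.

```shell
dir=$(mktemp -d)
mkdir -p "$dir/.deps"

# One fork+exec creates several files at once:
touch "$dir/.deps/a.Po" "$dir/.deps/b.Po"

# Zero fork+exec: ">>" creates the file if missing and does not
# truncate one that already exists.
for f in c.Po d.Po; do : >> "$dir/.deps/$f"; done

created=$(echo $(ls "$dir/.deps"))
rm -rf "$dir"
```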



Re: How to speed up 'automake'

2022-05-02 Thread Bob Friesenhahn

On Mon, 2 May 2022, Jan Engelhardt wrote:


> Indeed, if a source code package consists of 10k files, then configure
> produces another 10k files for the stuff in the ".deps" directories.
> There is not much autotooling can do here, as I believe pregenerating
> those 10k files all with "# dummy" content is to support the least common
> denominator of /usr/bin/make.


In a configure script I see the option:

  --enable-dependency-tracking
  do not reject slow dependency extractors
  --disable-dependency-tracking
  speeds up one-time build

so what tangible benefits does --disable-dependency-tracking actually 
provide?


If empty files are ok (assuming they are needed at all), can they be 
produced with a minimum number of executions of a 'touch' command?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Public Key, http://www.simplesystems.org/users/bfriesen/public-key.txt



Re: How to speed up 'automake'

2022-05-02 Thread Jan Engelhardt


On Monday 2022-05-02 14:20, Thomas Jahns wrote:
>>>> Is there a way to speed 'automake' up?
>>> 
>>> [...let] ephemeral builds [..] use /dev/shm [...]
>> 
>> There ought to be little difference [...] automake, that's nowhere near as
>> IO-heavy as untarring kernel source archives. It's much more a CPU-bound
>> task.
>
>I found tmpfs to be faster when there were multiple smallish (less than an fs
>block) writes to the same file, particularly by different programs. This may
>be more important in the context of all autotools taken together than automake
>alone. Also not all file systems take full advantage of all methods to prevent
>the system from hitting disk like XFS does, i.e. results depend on what you
>compare to.

But you're just acknowledging that the slowness comes from the _fs_, are you 
not?

Anyway...

Indeed, if a source code package consists of 10k files, then configure
produces another 10k files for the stuff in the ".deps" directories.
There is not much autotooling can do here, as I believe pregenerating
those 10k files all with "# dummy" content is to support the least common
denominator of /usr/bin/make.

I wonder, rather than emitting those 8 bytes to .Po/.Plo/.Tpo/etc. files, could
we emit 0 bytes instead? Then filesystems would have to write only the inode
and forego extent allocation for the data portion (and save some disk space
too, as each 8-byte file in practice reserves something like 4K on
non-packing/non-compressing filesystems).
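A sketch of the two variants (names illustrative). Whether the empty file really skips extent allocation depends on the filesystem, but the byte counts are easy to check:

```shell
dir=$(mktemp -d)

printf '# dummy\n' > "$dir/dummy.Po"   # today's 8-byte placeholder
: > "$dir/empty.Po"                    # proposed: inode only, no data

dummy_size=$(wc -c < "$dir/dummy.Po")
empty_size=$(wc -c < "$dir/empty.Po")
# On non-packing filesystems, `du` would typically report one block
# (often 4K) for dummy.Po and zero data blocks for empty.Po.
rm -rf "$dir"
```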



Re: How to speed up 'automake'

2022-05-02 Thread Thomas Jahns
> On Apr 30, 2022, at 01:31, Jan Engelhardt wrote:
> 
> On Friday 2022-04-29 22:59, Thomas Jahns wrote:
>> On 4/27/22 3:49 PM, R. Diez wrote:
>>> Is there a way to speed 'automake' up?
>> 
>> While you are probably looking for system-independent advice, the best 
>> results
>> I've had with speeding up ephemeral builds is to simply use /dev/shm for
>> backing storage on Linux, i.e. first try to put build directories there
>> ($XDG_RUNTIME_DIR is also fine on modern Linux). If the installation is not
>> needed later on, you can also put the installation path there.
> 
> There ought to be little difference, both use the page cache, except
> that using tmpfs carries the usual volatility risks (not backed by a
> real device, susceptible to power loss, etc., blocks other serious
> processes from using resources, and tmpfs objects may get moved to
> swapspace, which isn't great at all considering you get to pick up
> pieces from the swap partition in the event of a power loss.)
> 
> tmpfs may be interesting from a psychological point of view and/or
> when there are a *lot* of files. But automake, that's nowhere near as
> IO-heavy as untarring kernel source archives. It's much more
> a CPU-bound task.

Very much depends on what the programs do: I found tmpfs to be faster when 
there were multiple smallish (less than an fs block) writes to the same file, 
particularly by different programs. This may be more important in the context 
of all autotools taken together than automake alone. Also not all file systems 
take full advantage of all methods to prevent the system from hitting disk like 
XFS does, i.e. results depend on what you compare to.

Thomas
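For the record, a sketch of what "put the build directory on tmpfs" looks like in practice. Paths and fallbacks are illustrative, not part of any autotools machinery:

```shell
# Prefer the usual Linux tmpfs mounts; fall back to TMPDIR so the
# snippet also runs where neither is available.
base=${XDG_RUNTIME_DIR:-/dev/shm}
[ -d "$base" ] && [ -w "$base" ] || base=${TMPDIR:-/tmp}

build=$(mktemp -d "$base/build.XXXXXX")
# A typical out-of-tree build would follow (source path hypothetical):
#   cd "$build" && /path/to/source/configure && make -j"$(nproc)"
rm -rf "$build"
```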




