Bruno Haible wrote:
> Hi Jacob,
>
> AFAIU, the 4x sleep 0.1 are to determine whether
> am_cv_filesystem_timestamp_resolution should be set to 0.1 or to 1.
> OK, so be it.
>
> But the 6x sleep 1 are to determine whether
> am_cv_filesystem_timestamp_resolution should be set to 1 or 2.
> 2 is known to be the case only for FAT/VFAT file systems. Therefore
> here is a proposed patch to speed this up. On NetBSD, it reduces
> the execution time of the test from ca. 7 seconds to ca. 0.5 seconds.
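
(For context, the probe under discussion is essentially of the following
shape. This is a simplified sketch, not the actual automake macro, and
it uses GNU stat for brevity where the real test has to restrict itself
to portable tools:)

    # Distinguish 1-second from 2-second granularity by writing two
    # files one second apart and comparing mtimes.  The real test
    # repeats this (the "6x sleep 1") so that a single probe straddling
    # a 2-second tick cannot fake 1-second resolution.
    touch conftest.ts1
    sleep 1
    touch conftest.ts2
    if test "`stat -c %Y conftest.ts1`" != "`stat -c %Y conftest.ts2`"
    then
      echo "mtimes differ: resolution is 1 second or better"
    else
      echo "mtimes equal: resolution may be 2 seconds (FAT/VFAT)"
    fi
    rm -f conftest.ts1 conftest.ts2
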
>> The problem with the proposed patch is that it tries to read a
>> filesystem name instead of testing for the feature. This would not be
>> portable to new systems that use a different name for their FAT
>> filesystem driver.
> I can amend the patch so that it uses `uname -s` first, and does the
> optimization only for the known systems (Linux, macOS, FreeBSD, NetBSD,
> OpenBSD, Solaris).

This still has the same philosophical problem: testing for a known
system rather than for the feature we actually care about. (We could
also identify FAT with fair confidence by attempting to create a file
with a name containing a character not allowed on FAT filesystems,
although Linux once had at least one extended FAT driver ("umsdos", if
I remember correctly) that lifted the name limits, and I do not recall
whether it also provided improved timestamps.)
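
A sketch of that name-based probe, for concreteness (illustrative only;
as noted, an extended FAT driver could lift the name restrictions, so a
successful create does not conclusively rule FAT out):

    # FAT-family filesystems reject certain filename characters such as
    # ':'; if creating such a name fails, we are plausibly on FAT and
    # the slow 2-second-granularity probe is worth running.
    if touch 'conftest:fat' 2>/dev/null; then
      rm -f 'conftest:fat'
      echo "colon accepted in a filename: probably not FAT"
    else
      echo "colon rejected: possibly FAT (or another restrictive FS)"
    fi
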
>> I think the test can be better optimized for the common case by first
>> checking if stat(1) from GNU coreutils is available
>> ([[case `stat --version` in *coreutils*) YES;; *) NO;; esac]])
> Sure, if GNU coreutils 'stat -f' is available, things would be easy.
> But typically, from macOS to Solaris, it isn't.
> You can't achieve portability by using a highly unportable program
> like 'stat'. That's why my patch only uses 'df' and 'mount'.

You can use anything in configure, *if* you first test for it and have a
fallback if it is not available. In this case, I am proposing testing
for 'stat -f', using it to examine conveniently available timestamps to
establish an upper bound on timestamp granularity if we can, and falling
back to the current (slow) tests if not. Users of the GNU system will
definitely get the fast path.
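
Concretely, the availability test could be as small as this (a sketch;
the variable name is made up):

    # Feature-test GNU stat instead of guessing from `uname -s`; any
    # failure, including stat not existing at all, selects the fallback.
    case `stat --version 2>/dev/null` in
      *coreutils*) have_gnu_stat=yes ;;
      *)           have_gnu_stat=no ;;   # keep the existing slow probes
    esac
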
>> and, if it is (common case and definitely so on the GNU system),
>> checking [[case `stat --format=%y .` in *:??.000000000*)
>> SUBSEC_RESOLUTION=no;; *) SUBSEC_RESOLUTION=yes;; esac]] to determine
>> if sub-second timestamps are likely to be available
> I don't care much about the 0.4 seconds spent on determining sub-second
> resolution. It's the 6 seconds that bug me.

If 'stat -f' is available, we should be able to cut that to
milliseconds. GNU systems will have 'stat -f', others might. The slow
path would remain available if the fast path cannot be used. Using a
direct feature test for 'stat -f' might motivate the *BSDs to also
support it.

>> To handle filesystems with 2-second timestamp resolution, check the
>> timestamp on configure, and arrange for autoconf to ensure that the
>> timestamp of a generated configure script is always odd
> Since a tarball can be created on ext4 and unpacked on vfat FS,

That is exactly the situation I am anticipating here.

> this would mean that autoconf needs to introduce a sleep() of up to
> 1 second, _regardless_ of which FS it is running on. No, thank you,
> that is not a good cure for the problem.

One second, once, when building configure, to ensure that configure will
have an odd timestamp... does autoconf normally complete in less than
one second? Would this actually increase the running time
significantly? Or, as Simon Richter mentioned, use the utime builtin
(Autoconf is now written in Perl) to advance the mtime of the created
file by one second before returning, with no actual delay.
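
For illustration, the fix-up after generating configure could look like
this in shell (a sketch using GNU stat and touch; autoconf itself,
being Perl, could do the same with the utime builtin and no external
commands):

    # Ensure the generated configure has an odd mtime, so that a later
    # parity check can detect truncation to an even 2-second tick.
    mtime=`stat -c %Y configure`
    case $mtime in
      *[02468])                     # even seconds: bump to odd
        mtime=`expr $mtime + 1`
        touch -d @$mtime configure
        ;;
    esac
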
The bigger problem would be that it is impossible to properly package
such a configure script when it is generated on a filesystem with
2-second granularity. Such a configure script would always be unpacked
with an even timestamp (because the generating filesystem forced an
even timestamp, which is what got packaged), so the parity test would
report 2-second granularity even when the unpacking filesystem actually
has 1-second granularity. The suggested tests for sub-second
granularity would still work correctly on the unpacked files, however:
if you can see non-zero fractional seconds in timestamps, you know that
you are not on a 2-second granularity filesystem.
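
(For reference, the parity check itself would be cheap; something like
this sketch, run from within configure and again using GNU stat only
for brevity:)

    # If our own mtime, which autoconf arranged to be odd, now stats as
    # even, the filesystem truncated it to a 2-second tick.  As argued
    # above, this is exactly the check that an even *packaged* timestamp
    # would defeat.
    case `stat -c %Y "$0"` in
      *[02468]) maybe_2s_granularity=yes ;;
      *)        maybe_2s_granularity=no ;;
    esac
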
Maybe the best answer is to test for subsecond timestamp granularity
first, and then only do the slow test to distinguish between 1-second
and 2-second granularity if the subsecond test gives a negative result?
Most modern systems have subsecond timestamp granularity, so they would
need only the 0.4-second test; older systems would need the full
6.4-second test, but would still work reliably. At worst, we might need
to extend the 0.4-second test to 0.5 seconds, to confirm that we did
not just happen to straddle a filesystem timestamp tick.
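
In outline (a sketch; file names and counts are illustrative, and
whether `test -nt` honors sub-second mtimes varies between shells, so a
real implementation needs a portable comparison):

    # Cheap test first: five files written 0.1s apart.  On a 1-second
    # or coarser filesystem, at most one adjacent pair can straddle a
    # second boundary within this window, so requiring *every* pair to
    # differ avoids the straddling false positive discussed above.
    subsec=yes
    prev=conftest.t0
    touch $prev
    for i in 1 2 3 4; do
      sleep 0.1
      touch conftest.t$i
      test conftest.t$i -nt $prev || subsec=no
      prev=conftest.t$i
    done
    rm -f conftest.t?
    if test $subsec = yes; then
      am_cv_filesystem_timestamp_resolution=0.1
    else
      : # only now pay for the slow probe to separate 1s from 2s
    fi
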
We could also use the 'stat -f' test to gain partial information and
limit the slow-path tests that need to be run (a shell sketch follows
the list):

  - If 'stat -f' sees non-zero fractional seconds, we have sub-second
    resolution and can either use a value derived from observation or
    run (quick) tests.
  - If 'stat -f' sees zero fractional seconds on all files, but at
    least one file has an odd seconds field, we can assume one-second
    timestamp granularity.
  - If 'stat -f' sees zero fractional seconds and even timestamps on
    all files, we can either assume 2-second timestamp granularity or
    run the existing tests to confirm the result.
  - If 'stat -f' is not available, we simply run the existing tests,
    which can probably be rearranged to detect sub-second timestamp
    granularity in 0.4 or 0.5 seconds but will still need the 6.4 or
    6.5 seconds to resolve 1-second versus 2-second granularity.
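
As a sketch, that scan could look like this (assuming GNU stat; the
scan over `*` and the variable names are illustrative):

    # Classify from timestamps that already exist in the build tree:
    #   non-zero fraction      -> sub-second resolution, done
    #   an odd seconds value   -> at most 1-second granularity
    #   neither                -> assume, or confirm, 2-second granularity
    guess=
    for f in *; do
      case `stat --format=%y "$f" 2>/dev/null` in
        *.000000000*) ;;                     # zero fraction: no evidence
        ?*)           guess=subsecond; break ;;
      esac
      case `stat --format=%Y "$f" 2>/dev/null` in
        *[13579]) guess=1s ;;                # odd seconds: rules out 2s
      esac
    done
    echo "granularity: ${guess:-2s, or run the existing tests to confirm}"
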
-- Jacob