Re: C23 support in Autoconf

2024-05-01 Thread Nick Bowler
On 2024-04-30 21:59, Jacob Bachmeyer wrote:
> Paul Eggert wrote:
>> While we're adding to our wishlist that should also be a
>> configure-time option, not merely something in configure.ac. That
>> way, one could test a tarball's portability without having to modify
>> the source code.

This is already possible.  Just configure with CFLAGS=-std=c99 or
whatever.

> Perhaps --with-C-language-standard={C89,C99,C11,C23} and a
> --with-strict-C-language to select (example) c99 instead of gnu99?

In my opinion we should not add options like this to every package.
Unless I'm missing something, outside of some rare edge cases, a user
actually using such an option is going to achieve nothing at best and
otherwise change a package from "working" into "not working."  It is
not helpful to users to present them with useless options.

The realistic edge cases (such as the many historical GNU packages which
were broken by GCC's change of defaults to gnu11) can already be handled
by CFLAGS.
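
For example, a user can already request a particular language mode at
configure time with something like this (the exact flags are illustrative
and compiler-specific):

  % ./configure CFLAGS='-std=c99 -O2 -g'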

Cheers,
  Nick



Re: autoreconf --force seemingly does not forcibly update everything

2024-04-10 Thread Nick Bowler
On 2024-04-10 16:36, Simon Josefsson wrote:
> Is bootstrap intended to be reliable from within a tarball?  I thought
> the bootstrap script was not included in tarballs because it wasn't
> designed to be ran that way, and the way it is designed may not give
> expected results.  Has this changed, so we should recommend maintainers
> to 'EXTRA_DIST = bootstrap bootstrap-funclib.sh bootstrap.conf' so this
> is even possible?

Not including the scripts used to build configure in a source tarball
is a mistake, particularly for a GPL-licensed package.  The configure
script itself is clearly object code, and the GPL defines corresponding
source to include any "scripts to control [its generation]".

If you cannot successfully regenerate configure from a source tarball,
it is probably missing some of the source code for the configure script,
which is a problem if it was supposed to be a free software package.

Cheers,
  Nick



Re: autoreconf --force seemingly does not forcibly update everything

2024-04-09 Thread Nick Bowler
On 2024-04-09 18:06, Sam James wrote:
> Nick poses that a specific combination of tools is what is tested and
> anything else invalidates it. But how does this work when building on
> a system that was never tested on, or with different flags, or a
> different toolchain?
>
> It's reasonable to say "look, if you do this, please both state it
> clearly and also do some investigation first to see if you can
> reproduce it with my macros", but I don't think it's a crime for
> someone to attempt it either.

To be clear, I don't mean to suggest that modifying a package by
replacing m4 sources with different versions and/or regenerating
configure with a different version of Autoconf is something that
should never be done by downstream distributors.  If doing this
solves some particular problem, then by all means do it, that's
an important part of what free software is all about.

What I have a problem with is the suggestion that distributors should
systematically throw away actually-tested configure scripts by just
discarding any m4 source files that appear to be copied from another
project (Gnulib, in this case), copying in new ones from a possibly
different version of that project, regenerating the configure script
using a possibly different version of Autoconf, and then expecting
that this process will produce high-quality results.

Cheers,
  Nick



Re: autoreconf --force seemingly does not forcibly update everything

2024-04-01 Thread Nick Bowler
On 2024-04-01 16:43, Guillem Jover wrote:
> But if as a downstream distribution I explicitly request everything
> to be considered obsolete via --force, then I really do want to get
> whatever is in the system instead of in the upstream package.

If I distribute a release package, what I have tested is exactly what is
in that package.  If you start replacing m4 macros with different versions,
or use some distribution-patched autoconf/automake/libtool or whatever,
then you have invalidated any and all release testing.

This is fine, modifying a package and distributing modified versions
are freedoms 1 and 3, but if it breaks you keep both pieces.

The aclocal --install feature should be seen as a tool to help update
dependencies as part of the process of preparing a modified version, not
something that should ever be routinely performed by system integrators.
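
For example, someone preparing a modified version might run something like
the following (assuming the package keeps its local macros in m4/):

  % aclocal --install -I m4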

GNU/Linux distributions have a long history of buggy backports to the
autotools.  For a recent example, Gentoo shipped a broken libtool 2.4.6
which included a patch to make Gentoo installs go faster, but if you
prepared a package with this broken libtool version, the resulting
package would not build on HP-UX.  Oops.

Cheers,
  Nick



Re: Is it safe to use ax_gcc_builtin to detect __builtin_offsetof?

2024-02-25 Thread Nick Bowler
On 2024-02-25 02:28, Jeffrey Walton wrote:
> On Sun, Feb 25, 2024 at 2:09 AM Jeffrey Walton  wrote:
>> The page 
>> does not list __builtin_offsetof in the list of documented builtins.
>> But the page says "Unsupported built-ins will be tested with an empty
>> parameter set and the result of the check might be wrong or
>> meaningless so use with care."
>>
>> Is it safe to use ax_gcc_builtin to detect __builtin_offsetof?

This is the Autoconf list.  Despite the similar name, Autoconf Archive
is a separate project, and the author(s) of that macro (who would be the
people who can actually answer this question) might not be on this list.

Personally, if I really needed to probe whether __builtin_offsetof
works, the Autoconf-provided AC_COMPUTE_INT macro should be more than
good enough to do this.  Something like (untested):

  AC_CACHE_CHECK([for __builtin_offsetof], [my_cv_builtin_offsetof],
    [AC_COMPUTE_INT([testval],
       [__builtin_offsetof(struct foo, b) == (char *)&bar.b - (char *)&bar],
       [struct foo { char a; int b; } bar;],
       [testval=0])
     AS_CASE([$testval],
       [1], [my_cv_builtin_offsetof=yes],
       [my_cv_builtin_offsetof=no])])

  AS_CASE([$my_cv_builtin_offsetof],
    [yes], [AC_DEFINE([HAVE___BUILTIN_OFFSETOF], [1],
      [Define to 1 if __builtin_offsetof works])])

Test it on both a system that supports __builtin_offsetof and one that
doesn't; this will probably be good enough unless you have specific
knowledge of systems that are buggy or different in some way.

Hope that helps,
  Nick



Re: [PATCH] Add quotes in AS_IF test for gid_t

2024-02-06 Thread Nick Bowler
On 2024-02-07 00:54, Nick Bowler wrote:
> On 2024-02-07 00:37, Paul Eggert wrote:
>> On 2024-02-06 20:37, Nick Bowler wrote:
>>> The right place to fix this problem is in Emacs.
>>
>> I don't see this problem in current (bleeding-edge Savannah) Emacs.
>> Sam, which Emacs are you talking about?
> 
> The issue is still present on emacs git master as far as I can see.
> 
> You have to be (re)generating the configure script with Autoconf 2.72.
> The actual emacs 29.2 release bundle ships with a configure script
> built with Autoconf 2.71, which has a different implementation of
> AC_FUNC_GETGROUPS (one that does not use ac_cv_type_gid_t internally).
 
 typo, meant to write AC_TYPE_GETGROUPS

Cheers,
  Nick



Re: [PATCH] Add quotes in AS_IF test for gid_t

2024-02-06 Thread Nick Bowler
On 2024-02-07 00:37, Paul Eggert wrote:
> On 2024-02-06 20:37, Nick Bowler wrote:
>> The right place to fix this problem is in Emacs.
> 
> I don't see this problem in current (bleeding-edge Savannah) Emacs.
> Sam, which Emacs are you talking about?

The issue is still present on emacs git master as far as I can see.

You have to be (re)generating the configure script with Autoconf 2.72.
The actual emacs 29.2 release bundle ships with a configure script
built with Autoconf 2.71, which has a different implementation of
AC_FUNC_GETGROUPS (one that does not use ac_cv_type_gid_t internally).

Cheers,
  Nick



Re: [PATCH] Add quotes in AS_IF test for gid_t

2024-02-06 Thread Nick Bowler
On 2024-02-06 22:33, Sam James wrote:
> Noticed when building Emacs:
> ```
> * checking type of array argument to getgroups... ./configure: 42782: test: =: unexpected operator
> ```
> This turns out to be because of missing quotes in AS_IF for
> ac_cv_type_gid_t in AC_TYPE_GETGROUPS.

No, I don't think this is the right fix.  The lack of shell quotation is
not the cause of this problem.

In the AC_TYPE_GETGROUPS macro, ac_cv_type_gid_t should not be empty,
because Autoconf has code to assign this variable to a nonempty value.
The fact that it is empty suggests the problem lies elsewhere.

Oh look, I see this line in emacs-29.2/configure.ac:

  AC_DEFUN([AC_TYPE_UID_T])

This is the actual cause of the problem, because AC_TYPE_UID_T is the
part of Autoconf that would have assigned this variable.  Since Emacs
has deleted its definition, it has therefore broken other Autoconf
macros (like AC_TYPE_GETGROUPS) which depend on it.

I don't think it's right to work around damage like this in Autoconf.

The right place to fix this problem is in Emacs.

Cheers,
  Nick



Re: [sr #111014] autoheader doesn't work with AC_DEFINE_UNQUOTED

2024-01-30 Thread Nick Bowler
On 2024-01-30 21:47, anonymous wrote:
> If you use AC_DEFINE_UNQUOTED to define symbols whose names are
> expanded from variables, autoheader won't emit them to the template
> header file. Later when you run configure, these symbols won't get
> defined because the matching define/undef line that is looked for
> during substitution doesn't exist.

Autoheader can't possibly know in advance what text some arbitrary
shell expansion might contain when the configure script is executed.
So, as documented[1], it only works when the first argument to
AC_DEFINE_UNQUOTED is a "literal" (essentially, something that
looks like it will be unchanged by shell quotation).

You can use AH_TEMPLATE[2] to instruct autoheader about any template
lines that it should include which are not determined automatically.
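
For example, something like this (untested, with made-up feature names)
tells autoheader about every symbol that a shell loop might define:

  AH_TEMPLATE([HAVE_FEATURE_FOO], [Define to 1 if the foo feature is available.])
  AH_TEMPLATE([HAVE_FEATURE_BAR], [Define to 1 if the bar feature is available.])

  for my_feature in FOO BAR; do
    # ... decide whether this feature is actually available ...
    AC_DEFINE_UNQUOTED([HAVE_FEATURE_$my_feature], [1])
  done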

[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/autoconf.html#autoheader-Invocation
[2] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.72/autoconf.html#Autoheader-Macros

Hope that helps,
  Nick



Re: Embed newlines in AC_MSG_* output

2024-01-27 Thread Nick Bowler
Hi,

On 2024-01-27 15:46, Dave Hart wrote:
> What is the right way to embed newlines in AC_MSG_FAIL or other AC_MSG_*
> macros?
[...]
> I tried:
> 
> AC_MSG_FAILURE(
> [--enable-openssl-random was used but no suitable SSL library was\
> found.  Remove --enable-openssl-random if you wish to build without\
> a cryptographically secure RNG.\
> WARNING: Use of ntp-keygen without a secure RNG may generate keys\
> that are predictable.])

Just don't end your lines with backslashes (which causes the shell to
delete the following newline) and AC_MSG_FAILURE will give you multiple
lines of output.  Normally you'll want the first line to be somewhat
shorter (since configure will insert a prefix in front of the message).
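
In other words, something like this (untested) should print the message on
several lines:

  AC_MSG_FAILURE(
  [--enable-openssl-random was used but no suitable SSL library was
  found.  Remove --enable-openssl-random if you wish to build without
  a cryptographically secure RNG.
  WARNING: Use of ntp-keygen without a secure RNG may generate keys
  that are predictable.])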

Hope that helps,
  Nick



Re: AC_TYPE_LONG_LONG_INT busted in current Autoconf

2024-01-16 Thread Nick Bowler
On 2024-01-16 23:22, Paul Eggert wrote:
> Thanks for reporting that bug. I installed the attached on Savannah; please 
> give it a try.

I applied the patch on top of the 2.72 release.  You definitely need to also
remove the ;; on the previous lines; every shell I try (new and old) now barfs
on the case statement, e.g.:

  % ./configure   
  [...]
  checking for unsigned long long int... ./configure[3221]: syntax error at line 3229 : `confdefs.h' unexpected

If I remove the ;; too it seems to be working.

Thanks,
  Nick



AC_TYPE_LONG_LONG_INT busted in current Autoconf

2024-01-16 Thread Nick Bowler
Hi,

In recent versions of Autoconf, AC_TYPE_LONG_LONG_INT (and
AC_TYPE_UNSIGNED_LONG_LONG_INT) wrongly indicate that (unsigned) long
long is supported on compilers which actually do not support it.

Looking at the implementation of AC_TYPE_LONG_LONG_INT, it contains:

  ac_cv_type_long_long_int=yes
  case $ac_prog_cc_stdc in
  no | c89) ;;
  *) [ run compiler probes ]
  esac

which looks like it is assuming "long long" is supported for
pre-standard and C89 compilers, only actually running checks
on compilers which support newer C standards; presumably this
is the opposite of what is intended.

The failure is easy to demonstrate with gcc:

  % cat >configure.ac <<'EOF'
  AC_INIT([test], [0])
  AC_PROG_CC
  AC_TYPE_LONG_LONG_INT
  AC_OUTPUT
EOF
  
  % autoconf-2.72 --force
  % ./configure CC='gcc -Werror=long-long'
  [...]
  checking for gcc -Werror=long-long option to enable C11 features... unsupported
  checking for gcc -Werror=long-long option to enable C99 features... unsupported
  checking for gcc -Werror=long-long option to enable C89 features... none needed
  checking for unsigned long long int... yes
  checking for long long int... yes

The last apparently working version is autoconf-2.70, but that is
because this version of autoconf has a different bug which causes it to
believe C89 compilers support C99 (so the long long test is run in this
version).  The last actually working version is autoconf-2.69:

  % autoconf-2.69 --force
  % ./configure CC='gcc -Werror=long-long'
  [...]
  checking for gcc -Werror=long-long option to accept ISO C89... none needed
  checking for unsigned long long int... no
  checking for long long int... no

Cheers,
  Nick



Re: [sr #111007] autoconf 2.72 warning: file 'version.m4' included several times

2024-01-13 Thread Nick Bowler
On 2024-01-13 03:26, Румен Петров wrote:
> autoconf 2.72 is first release that prints warning:
> configure.ac:2: warning: file 'version.m4' included several times

The warning here is erroneous; it appears now because Autoconf 2.72's
m4sugar.m4 (which is used under the hood basically everywhere) includes
an expansion of m4_sinclude([version.m4]).

Autoconf implements this warning with a very simplistic check for whether
a file is actually included multiple times: it defines the m4_include
and m4_sinclude macros to record their argument each time they are used,
and to check whether either was ever called with that argument before.

Now, m4sugar does not _actually_ include your version.m4, because the
Autoconf build/installation process generates an m4 "frozen state" file
(m4sugar.m4f) where the file inclusion is already done using version.m4
from Autoconf's source code, and this is what actually gets used when
you run autoconf.  However, the frozen state *does* include the record
that m4_sinclude was expanded previously with the version.m4 argument.

Probably we could fix this problem by changing m4sugar.m4 to not use
m4_sinclude.  It could use m4_builtin([sinclude], [version.m4])
instead, which would not expose the record of internal inclusions
to the user like this.

To work around the warning in autoconf-2.72, you can change the spelling
of version.m4 to something functionally equivalent, for example:

  m4_include([./version.m4])

You can also just go in and delete the indication that Autoconf uses
to produce this warning, for example:

  m4_builtin([undefine], [m4_include(version.m4)])

Incidentally, while not relevant to your example, Autoconf 2.72 also
installs its own version.m4 file to the global m4 include search path.
So if you were previously using M4PATH or autoconf's -I option to locate
a file with this name, m4_include([version.m4]) will now pick up the one
shipped with Autoconf instead, which is probably not what anyone would
actually want to happen in this scenario.

Cheers,
  Nick



Re: AT_MTIME_DELAY not working?

2023-12-22 Thread Nick Bowler
On 2023-12-22 09:28, Zack Weinberg wrote:
> On Thu, Dec 21, 2023, at 10:07 PM, Jacob Bachmeyer wrote:
[...]
>> I suggest revising AT_MTIME_DELAY to actually create two files and
>> loop touching one of them until the timestamps differ.
> 
> This won’t work, because whether *test* thinks two timestamps differ
> may be different from whether *autom4te* thinks two timestamps differ
> (due to the whole mess with Time::HiRes not necessarily being
> available, timestamps getting rounded to the nearest IEEE double,
> etc).  Also, test -nt isn’t portable, we’d have to do the same
> mess with ls -t that’s in the code setting at_ts_resolution.

Since for the purpose of testing autom4te behaviour one should be able
to assume autom4te is available, a solution for this issue would be to
simply add a mechanism to autom4te (or find a creative way to do it
with existing autom4te) which compares two file timestamps, and use
that in the loop.

Cheers,
  Nick



Re: [GNU Autoconf 2.72e] testsuite: 11 119 261 failed on Solaris 11.4 x86

2023-12-21 Thread Nick Bowler
On 2023-12-21 19:26, Paul Eggert wrote:
> On 2023-12-21 13:19, Zack Weinberg wrote:
>> Sorry, I'm with GNU here: failure to report errors on writing to
>> stdout is a bug.  No excuses will be accepted.
> 
> Agreed. printf commands that silently succeed when they can't do the
> requested action are simply broken.

I tested several modern, current operating systems, including:
  OpenBSD 7, NetBSD 9, FreeBSD 13, Alpine Linux 3.15
I also tested several not-so-modern systems, including:
  DJGPP, HP-UX 11, Solaris 8.

On every single one of these systems, the /usr/bin/printf (or equivalent)
does not generally diagnose errors that occur when writing to standard
output, and an exit status of 0 is returned.

Further notes:

The shell on FreeBSD has a printf builtin which does diagnose such
errors and does exit with a nonzero status.

The shell on Alpine has a printf builtin which does not diagnose such
errors but does exit with a nonzero status.

The shell on NetBSD has a printf builtin which does not diagnose such
errors and exits with a 0 status.

The DJGPP bash shell has a printf builtin which does not diagnose such
errors and exits with a 0 status.

The version of bash that comes with Solaris 8 has a printf builtin which
does not diagnose such errors and exits with a 0 status.

The other systems tested do not have printf builtins in their shells, so
plain "printf" invokes "simply broken" /usr/bin/printf.

So sure, we can call it a "bug" and we can call these systems "simply
broken" but the reality is that these systems exist and portability means
dealing with this behaviour even if it is not what we wish they would do
or what some piece of paper says they should do.

There are probably a lot more systems with a "simply broken" printf,
as the printf utilities in SVr4 and 4.4BSD will also behave like this...

Cheers,
  Nick



Re: [GNU Autoconf 2.72e] testsuite: 11 119 261 failed on Solaris 11.4 x86

2023-12-21 Thread Nick Bowler
On 2023-12-21 19:34, Paul Eggert wrote:
>   ulimit -f 0
>   trap "" XFSZ
>   printf "test" >test || echo failed with status $?
> 
> which issues the following diagnostics on Solaris 10 /bin/sh:
> 
>   printf: write error: File too large
>   failed with status 1

I think you might want to double check your test setup.  This error
message is exactly what you'd get if you are running printf from a
recent release of GNU coreutils, rather than the /usr/bin/printf
that comes with Solaris.

I don't have a Solaris 10 box handy for testing right now but neither
Solaris 8 /usr/bin/printf nor heirloom-tools printf (which is ported
from OpenSolaris, contemporaneous with Solaris 10) print this error
message, and neither exit with status 1.

Cheers,
  Nick



Re: [GNU Autoconf 2.72e] testsuite: 11 119 261 failed on Solaris 11.4 x86

2023-12-21 Thread Nick Bowler

On 2023-12-21 13:48, Zack Weinberg wrote:
> and an unsuccessful exit status.  I would guess that on your machine
> the printf built-in and/or standalone printf executable are not
> reporting write errors.

I think it is not reasonable to expect any utility to report any kind
of error on writes to standard output.  This is not normal behaviour
for C programs and, in the case of printf utilities which are not shell
builtins, such behaviour is likely unique to GNU.

Without /dev/full it is difficult to portably trigger write errors,
but NetBSD 9 has /dev/full and its /usr/bin/printf also does not
report such errors.

You can more reliably get errors by redirecting to a pipe and closing
the read end, since in this case the SIGPIPE will not go unnoticed
(although an ignored SIGPIPE is inherited from the parent process, which
may affect the results).  This can be tricky to set up in a shell script,
though.
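
A rough sketch of the idea (racy and shell-dependent, which is part of why
it is tricky to script portably):

  % { sleep 1; printf 'hello\n'; echo "printf status: $?" >&2; } | :

The ":" on the right exits immediately, so by the time printf runs its
standard output is a pipe with no reader.  If SIGPIPE is not ignored the
writer is killed by the signal; if it is ignored, the write fails with
EPIPE and a careful printf implementation reports an error.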

Cheers,
  Nick



Re: [sr #110554] AC_CONFIG_HEADERS doesn't work properly for files with Windows line-endings

2023-12-15 Thread Nick Bowler
On 2023-12-15 11:19, Zack Weinberg wrote:
[...]
> old non-GNU implementations of awk probably don't support using a
> regexp as the third argument to 'split'.  (I'm not 100% sure about
> this; the gawk manual is usually very good about pointing out
> portability issues but in this case it's ambiguous.

This fact is also explicitly mentioned in the Autoconf documentation[1]:

  In code portable to both traditional and modern Awk, FS must be a
  string containing just one ordinary character, and similarly for
  the field-separator argument to split.

The heirloom-tools package[2] includes a traditional awk with this
particular limitation.  This is a free software implementation (ported
from OpenSolaris) that can be installed on a modern GNU/Linux system.
It should be very similar to Solaris 10 /bin/awk:

  % echo abcdefg | heirloom-tools/5bin/awk '{ split($1, a, /c?e/); print a[2]; }'
  awk: syntax error near line 1

  % echo abcdefg | heirloom-tools/5bin/awk '{ split($1, a, "c?e"); print a[2]; }'
  defg

Even with AC_PROG_AWK, on ULTRIX it selects "nawk" which still does not
work with a regexp argument to split, but it does work if the argument
is a string which can be interpreted as a regexp:

  ultrix% echo abcdefg | nawk '{ split($1, a, /c?e/); print a[2]; }'
  [no output]

  ultrix% echo abcdefg | nawk '{ split($1, a, "c?e"); print a[2]; }'
  fg

[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.71/autoconf.html#Limitations-of-Usual-Tools
[2] https://heirloom.sourceforge.net/tools.html

Cheers,
  Nick



Re: Yet another license clarification

2023-12-12 Thread Nick Bowler
On 2023-12-12 14:05, Paul Eggert wrote:
> On 12/12/23 06:36, Sergey Kosukhin wrote:
[...]
>> 2. Some of the macros are refactored versions of the Autoconf macros (NOT
>> the Autoconf Archive macros). For example, I copied AC_FC_LINE_LENGTH from
>> fortran.m4 to a separate file, renamed the macro to ACX_FC_LINE_LENGTH,
>> fixed a couple of issues and refactored it. As far as I understand, I must
>> copy the whole license header from fortran.m4 to my file. There are three
>> things that I am not sure about:
>> a) may I omit the first two lines saying that "This file is part of
>>Autoconf..." because they look a bit misleading in this context?
>> b) may I add an extra copyright line?
>> c) do I have to provide any extra information in the file?
> 
> The answers to (a) and (b) are "yes". For (c) it's "no". However, you must
> redistribute a copy of the GPL itself (the "COPYING" file in Autoconf)
> though of course this is not in the .m4 file itself.

I want to add that when distributing ("conveying") any modified versions,
section 5 of the GNU GPL version 3 requires that the work "carry
prominent notices stating that you modified it, and giving a relevant
date."  Previous versions of the GPL have a very similar requirement.

There are probably many ways to achieve this but for a single file
it seems to me that the most straightforward way is to add another
notice near the license notice stating that the file was modified
from its original version by whomever on whatever date.  I suggest
also including a brief summary of important differences but this
goes beyond the minimum requirements of the GPL.
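
For example, a notice along these lines, placed next to the existing
license notice, would be one straightforward possibility (the name, date
and details are of course placeholders):

  # This file is based on AC_FC_LINE_LENGTH from GNU Autoconf's fortran.m4.
  # Modified by J. Random Hacker on 2023-12-12: renamed the macro to
  # ACX_FC_LINE_LENGTH and refactored it.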

Cheers,
  Nick



Re: [sr #110382] In autoconf-2.69d AC_LANG_SOURCE implicitly includes '#include "confdefs.h"'

2023-12-11 Thread Nick Bowler
On 2023-12-08 09:25, Zack Weinberg wrote:
> Thinking about this one some more, I see two ways to make confdefs.h
> idempotent:
> 
> 1. A conventional "multiple inclusion guard", wrapping the body of the file in
> #ifndef ... #endif.  This will work on all compilers and regardless of whether
> confdefs.h is included or concatenated with the test program, but requires us
> to change AC_DEFINE and friends to do something like
> 
> 
> #define MACRO 1
> 
> 
> The thing I'm worried about with this one is if there's third party macros out
> there somewhere that bypass AC_DEFINE.  The sed construct up there should be
> portable.

I think your sed construct must have been eaten somewhere along the line.
Without being able to see the suggestion, I imagine that using sed to rewrite
confdefs.h every time is going to incur quadratic runtime complexity, while
simply appending new definitions to an existing file normally does not.

From what I understand of this issue, I don't think we should change anything
in Autoconf because there does not seem to be any real advantage to fixing this.

  - Excluding the NEWS item which references this one report, the Autoconf
    documentation does not mention confdefs.h even once.  This issue can
    only affect packages relying on undocumented Autoconf internals.  It
    does not affect normal usage of AC_LANG_SOURCE and friends.

  - We have exactly one report of problems (apr), and this package has been
    updated to use AC_LANG_PROGRAM since this was reported.

  - The consequence of this issue is just a compiler warning from a few
    versions of one compiler (admittedly an important one).

The bogus warning should just be fixed in gcc.

> 2. Alternatively, use #pragma once and change AC_LANG_CONFTEST(C) to #include
> confdefs.h rather than prepending its contents to the test program.  This
> would keep macros that bypass AC_DEFINE working, but would break with
> compilers that make #pragma once do something wacky (it should be no worse
> than the status quo with compilers that merely _ignore_ #pragma once). I also
> wonder if there's a concrete reason why AC_LANG_CONFTEST(C) _shouldn't_
> #include confdefs.h.

Using nonstandard pragmas seems even worse to me, and just moves the problem
elsewhere (many compilers will warn by default if they encounter "#pragma once",
including older versions of gcc!).

If you really want to work around this gcc bug then it is probably sufficient
to just install special cases in the internal Autoconf macros that generate
these definitions of the problematic macros, for example (untested):

  #ifndef AC_DEFINED__STDC_WANT_IEC_60559_ATTRIBS_EXT__
  #define AC_DEFINED__STDC_WANT_IEC_60559_ATTRIBS_EXT__
  #define __STDC_WANT_IEC_60559_ATTRIBS_EXT__ 1
  #endif

Then this could still be inserted to confdefs.h by appending and does not
rely on any nonstandard pragmas.

Cheers,
  Nick



Re: [sr #110846] cross-compilation is not entered when build_alias and host_alias are the same

2023-12-07 Thread Nick Bowler

On 2023-12-07 21:28, Zack Weinberg wrote:

> Follow-up Comment #1, sr#110846 (group autoconf):
>
> We regret the delay in responding to this bug report.
>
> I believe this is the same as #110348.  The proposal there would make it so
> you could force a configure script into cross-compilation mode, even when the
> build and host triples are the same and the cross-compiled executables can run
> on the build system, by specifying --host and *not* --build.  Would that work
> for you?  (Please reply in #110348, I'm going to close this report.)
>
> This will not happen for 2.72 but hopefully will for the release after that.


You can already force cross-compilation mode today by running

  ./configure cross_compiling=yes

I think it would be better to simply document this more clearly. 
Changing the behaviour of --host (without --build) to force cross 
compilation mode seems ill-advised.  As I recall, this was tried before 
some years back and there were complaints by people cross-building with 
mingw on GNU/Linux build systems, who actually wanted the runtime tests 
to be done.


I think the current wording in the manual (which de-emphasized the 
recommendation to always specify --build and --host together) came as a 
result of those complaints, and it explicitly mentions the mingw use case.


Cheers,
  Nick



Re: [GNU Autoconf 2.71] testsuite: 254 255 ... catastrophic failure

2023-11-29 Thread Nick Bowler
On 2023-11-29 14:29, Zack Weinberg wrote:
> On Wed, Nov 29, 2023, at 9:32 AM, Dennis Clarke via Bug reports for autoconf wrote:
[...]
>> --- ./at_config_vars-state-env-expected 2023-11-29 09:14:04.189405540 -0500
>> +++ ./at_config_vars-state-env.after 2023-11-29 09:14:04.189405540 -0500
>> @@ -47,7 +47,7 @@
>>   PWD=/root/autoconf-2.71/tests/testsuite.dir/254
>>   SHELL=/bin/bash
>>   SHELLOPTS=braceexpand:hashall:interactive-comments:posix
>> -SHLVL=2
>> +SHLVL=3
>>   TAR_OPTIONS='--owner=0 --group=0 --numeric-owner'
>>   TERM=xterm-256color
>>   UID=0
> 
> Changes in the value of SHLVL are supposed to be ignored.

No current Autoconf release includes the fix[1] to ignore SHLVL in the
test suite.

Current versions of Devuan ship bash 5.2, which is probably why this failure
is happening.  Devuan, like Debian, ships a crippled dash (with broken
$LINENO) as /bin/sh, so Autoconf's test suite by default re-execs itself
with /bin/bash and then hits all these problems.

This might be a sufficient workaround:

  % make check run_testsuite='CONFIG_SHELL=/bin/sh /bin/sh tests/testsuite -C tests MAKE=$(MAKE)'

[1] https://git.savannah.gnu.org/gitweb/?p=autoconf.git;a=commitdiff;h=412166e185c00d6eacbe67dfcb0326f622ec4020

Cheers,
  Nick



Re: m4_ax_check_typedef.m4 fixes

2023-11-11 Thread Nick Bowler

Hi,

On 2023-11-11 04:34, stsp wrote:

> Hi, I've found the m4_ax_check_typedef.m4 script here:
> http://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_check_typedef.m4


This is the Autoconf list, which despite the confusingly similar name is 
unrelated to the "Autoconf Archive" project.



> It appears to be very buggy:


Yes, unfortunately we get a lot of reports of problems in the
Autoconf Archive sent to the Autoconf lists.



> Attached is the fixed version of a script.

According to

  https://www.gnu.org/software/autoconf-archive/How-to-contribute.html

The preferred way to submit patches to the autoconf archive is via the 
patch tracker[1] on Savannah.  You could also try the mailing list[2] at 
autoconf-archive-maintain...@gnu.org (but this is not mentioned on the 
how to contribute page).


[1] http://savannah.gnu.org/patch/?func=additem&group=autoconf-archive
[2] https://lists.gnu.org/mailman/listinfo/autoconf-archive-maintainers

Hope that helps,
  Nick



Re: How to get autoconf to respect CC="gcc -std=c89"?

2023-10-09 Thread Nick Bowler
On 2023-10-08, Niels Möller  wrote:
> I would have expected that every input that is valid c89 also is valid
> c99, so that support for c99 strictly implies support for c89. But there
> may be some corner case I'm not aware of.

It is not true in general that a valid C89 program is also valid
C99, but it is essentially always the case for "normal" programs and
especially for "portable" programs which simply have to deal with the
fact that you can't really rely on any compiler providing perfect
strict standards conformance modes.

The most obvious difference is probably that "restrict" can be used
as an identifier in C89 (and it is not reserved anywhere) but this is
disallowed in C99 as "restrict" is lexically a keyword.

Furthermore, it is also not true that a program valid for both standards
will necessarily do the same thing.  It is easy to construct examples of
such behaviour differences deliberately but it probably never happens
inadvertently.

Here's one example of such a program (compare output when compiled
with gcc -std=c89 versus gcc -std=c99):

  #include <stdio.h>

  int main(void)
  {
printf(
  "the compiler parses comments like C%d\n", 88 + 11 //**/ 11
);
return 0;
  }

Here's another:

  #include <stdio.h>

  enum { VER = 99 };
  int main(void)
  {
 if ((enum { VER = 89 })0)
;
 printf("the compiler implements block scopes like C%d\n", VER);
 return 0;
  }

Cheers,
  Nick



Re: Evaluating arithmetic expressions with macros (eval)

2023-10-04 Thread Nick Bowler
Hi,

On 2023-10-04, Sébastien Hinderer  wrote:
> I find myself stuck with something which I assume is trivial. I define:
>
> m4_define([X], [9])
> m4_define([Y], [3])
>
> And I would like to define Z as being the arithmetic sum of X and Y and
> can seem to get it.
>
> I tried several variations of eval but had no success. I understand
> that all the macros need to be expanded before eval is called but I
> don't understand how to do it.

The short answer for your specific example is to simply not quote the
arguments:

m4_define([Z], m4_eval(X + Y))

This approach is probably sufficient for most typical uses of m4_eval,
as there would seem to be little chance of unwanted macro expansion.

Some details: when m4 sees the following macro expansion:

m4_define([Z], m4_eval(X + Y))

- The first argument contains no unquoted text, so no macro expansion is
  performed.  The quotes are removed, and the first argument is "Z".

- The second argument contains unquoted text and a macro:

   m4_eval(X + Y).

   - The first argument of this contains unquoted text with two macros,
 X and Y.  These are replaced with 9 and 3, respectively.  There
 are no further macros to expand and no quotes to remove, so the
 actual argument to m4_eval is 9 + 3.

   Now m4_eval(9 + 3) is expanded, giving 12.  There are no further
   macros to expand and no quotes to remove, so the second argument
   to m4_define is 12.

Now m4_define(Z, 12) is expanded.

Hope that helps,
  Nick



Re: tcc 0.9.28rc testing: bug in autoconf 2.71 with AC_CHECK_DEFINE

2023-09-25 Thread Nick Bowler
On 24/09/2023, Peter Johansson  wrote:
> Hi Detlef and Nick,
>
> On 24/9/23 11:03, Nick Bowler wrote:
>> The word AC_CHECK_DEFINE is not found anywhere in the Autoconf
>> source code or documentation.
>
> My guess would be that the 3rd party is the autoconf archive because
> they provide both AX_CHECK_DEFINE and AC_CHECK_DEFINE
>
> http://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_check_define.m4

OK, I see.

So, depending on the system you are running Autoconf on, GNU m4 may
predefine __unix__ as an m4 macro which expands to the empty string.

Ignoring the fact that this macro definition flagrantly disregards
the Autoconf reserved namespace... AC_CHECK_DEFINE here is quoting
inconsistently: $1 is double-quoted in the argument to AC_LANG_PROGRAM,
but it is only single-quoted in the arguments of AS_VAR_PUSHDEF and
AC_CACHE_CHECK.

So no amount of quoting at the call site will ever solve the problem for
Detlef.  We can quote __unix__ correctly for AC_LANG_PROGRAM, or we can
quote it for the other expansions, but never both at the same time.

If unwilling to fix the quoting issues within AC_CHECK_DEFINE, one way
to work around the problem is by using a quadrigraph to prevent __unix__
from being recognized as a macro, since quadrigraphs are removed only
after all macro processing is complete.  For example:

  AC_CHECK_DEFINE([__unix@&t@__], [...])

Another option is to redefine __unix__ as this GNU m4 feature seems
pretty unlikely to be useful in Autoconf, and even then it seems
unlikely to matter exactly what text __unix__ expands to:

  m4_ifdef([__unix__], [m4_define([__unix__], [[__unix__]])])dnl
  AC_CHECK_DEFINE([__unix__], [...])

I don't think there is any regression in Autoconf here, I don't see any
significant difference in behaviour between Autoconf 2.72c, 2.71 or
2.69.

Hope that helps,
  Nick



Re: tcc 0.9.28rc testing: bug in autoconf 2.71 with AC_CHECK_DEFINE

2023-09-23 Thread Nick Bowler
On 2023-09-23, Nick Bowler  wrote:
> On 2023-09-23, Detlef Riekenberg  wrote:
>> AC_CHECK_DEFINE(__unix, CFLAGS="-DFOUND__unix $CFLAGS")
>> AC_CHECK_DEFINE(__unix__, CFLAGS="-DFOUND__unix__ $CFLAGS")
>> AC_CHECK_DEFINE(__linux__, CFLAGS="-DFOUND__linux__ $CFLAGS")
[...]
> So it sounds like there must be some third party code involved which
> is defining this macro (and this code is defining macros in the AC_*
> namespace to make it look like it came from Autoconf when in fact it
> did not).

Just to add, you don't need any third party macros to check for typical
C predefined macros including __unix, etc.  I would write such checks
something like this (untested):

  AC_COMPUTE_INT([unix_val], [__unix], [@&t@], [unix_val=0])
  AS_IF([test $unix_val -ne 0],
[put code here to run when __unix is defined and is non-zero])

That works out of the box with Autoconf and should be a very robust
check.  The third argument to AC_COMPUTE_INT is @&t@ (which expands to
nothing) to prevent Autoconf from inserting the default #includes in the
test program.

If you need to distinguish the case where __unix may be pre-defined
with the value 0 (probably not relevant with this particular macro),
then you can tweak the action-if-failed (fourth argument) and if
condition a bit.
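
A sketch of that variant (untested), using a sentinel value instead of 0
in the action-if-failed:

  AC_COMPUTE_INT([unix_val], [__unix], [@&t@], [unix_val=undefined])
  AS_IF([test x"$unix_val" != x"undefined"],
    [put code here to run whenever __unix is defined, even to 0])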

Hope that helps,
  Nick



Re: tcc 0.9.28rc testing: bug in autoconf 2.71 with AC_CHECK_DEFINE

2023-09-23 Thread Nick Bowler
Hi,

On 2023-09-23, Detlef Riekenberg  wrote:
> During testing of tcc 0.9.28rc,
> I found a strange bug in autoconf 2.71 with `AC_CHECK_DEFINE` for
> `__unix__`.

The word AC_CHECK_DEFINE is not found anywhere in the Autoconf
source code or documentation.

So I am afraid it is a bit difficult to understand this report.

[snip]
> Please test this snipped with the current 2.72rc:
> ```
> AC_CHECK_DEFINE(__unix, CFLAGS="-DFOUND__unix $CFLAGS")
> AC_CHECK_DEFINE(__unix__, CFLAGS="-DFOUND__unix__ $CFLAGS")
> AC_CHECK_DEFINE(__linux__, CFLAGS="-DFOUND__linux__ $CFLAGS")

As expected, if I translate this into an "obvious" configure.ac by
adding the mandatory AC_INIT and AC_OUTPUT expansions, I just get:

  configure.ac:3: error: possibly undefined macro: AC_CHECK_DEFINE
  [...]

So it sounds like there must be some third-party code involved which
is defining this macro (and this code is defining macros in the AC_*
namespace to make it look like it came from Autoconf when in fact it
did not).

That doesn't mean there isn't a bug in Autoconf.  It would be helpful if
you could produce a complete self-contained test case which demonstrates
the problem, one that doesn't depend on code from outside of Autoconf
which we don't necessarily have.

Cheers,
  Nick



Re: AC_SYS_LARGEFILE

2023-09-11 Thread Nick Bowler
On 2023-09-11, Sébastien Hinderer  wrote:
> I am writing with a question about the AC_SYS_LARGEFILE macro.
>
> It would be great to be able to use it but according to its
> documentation, the macro adds its flags to the CC output variable, which
> is not convenient in my context because we have dedicated variables not
> only for C flags but also for C preprocessor flags.

Autoconf is designed to facilitate build systems that comply with the
GNU coding standards.

CFLAGS and CPPFLAGS cannot be used for large-file support because
these flags are required for proper compilation, and the standards
say such flags don't go into CFLAGS or CPPFLAGS.  This is because
the user is supposed to be able to override these variables.  For
example:

  % ./configure
  % make CFLAGS=-g3

would almost certainly break horribly if configure put any large-file
support options into CFLAGS.

Looking at the code, CC is modified only if the -n32 option is needed
to enable large-file support.  The comments suggest this is required
on IRIX.  If large-file support can be enabled by preprocessor macros
(which I imagine is the case on all current systems), AC_DEFINE is used.

It has been this way since the macro was originally added to Autoconf.
I can only speculate as to why the original author used CC, but the
reason is probably so that you can take an existing package, add
AC_SYS_LARGEFILE with no other modifications, and have it almost
certainly work without any major problems.

Anything else would likely fail to comply with the standards or would
require package maintainers to edit their makefiles to ensure some new
variable is included on every compiler command line.

If they miss one, then their program would work perfectly almost
everywhere but fail when building on IRIX (probably a more serious
concern when this macro was added back in 2000 than it is today).

Furthermore, in the real world, package authors are notoriously bad at
ensuring CFLAGS is properly passed to every single C compiler invocation.
But people are usually much better at using CC consistently.

> Thus, what I'd ideally need is a version of the macro where I can
> somehow specify what to do with both large-file related CFLAGS and
> CPPFLAGS.

If you really don't want configure modifying CC on IRIX, while still
complying with the GNU coding standards, then you can do something like
this instead (untested):

  save_CC=$CC
  AC_SYS_LARGEFILE
  AS_IF([test x"$CC" != x"$save_CC"],
  [dnl The undocumented cache variable ac_cv_sys_largefile_CC here exists
  dnl in every version of Autoconf with AC_SYS_LARGEFILE; you could also
  dnl pick apart $CC to find out what flags were added.
  AC_SUBST([LARGEFILE_FLAGS], [$ac_cv_sys_largefile_CC])
  CC=$save_CC])

Then, modify your Makefiles to ensure $(LARGEFILE_FLAGS) is included
on every compiler command line.
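
With Automake, for example, that could be as simple as the following
fragment (assuming the LARGEFILE_FLAGS substitution from the sketch above):

  AM_CFLAGS = $(LARGEFILE_FLAGS)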

Hope that helps,
  Nick



Re: libtool (use with autotest)

2023-07-24 Thread Nick Bowler
On 2023-07-24, Simon Sobisch  wrote:
>
> I hope to possibly get an answer by moving this question to the
> appropriate lists :-)
> For more context I provide the original responses to this topic.
>
> Am 06.07.2023 um 14:55 schrieb Jose E. Marchesi:
>>
>>> On 2023-07-03 17:16:59 +0200, Bruno Haible wrote:
 Someone wrote:
> Without relinking at install time, I don't see how tests can
> reliably load the just-built library from the sources (objdir
> really) rather than loading the installed library.  Unless
> perhaps there is a belief that LD_LIBRARY_PATH is reliable and
> supercedes, and there are wrappers

 Yes, on all ELF systems, libtool creates wrappers that set
 LD_LIBRARY_PATH, for all programs that link to shared libraries in
 the build dir.
>>>
>>> But wrappers have drawbacks: they make the use of gdb or valgrind
>>> less convenient.
>>
>> Just a tiny bit less convenient:
>>
>> $ libtool --mode=execute gdb ./prog
>> $ libtool --mode=execute valgrind ./prog
>
> Just to recheck:
>
> When using both autotest (autoconf) generated testsuites and libtool,
> then how should we handle the following, given that we generate
>
> bin/runner
> bin2/compiler
> runtime/librun
>
> * specify binaries to test AT_TESTED
> They are not in PATH, so should we add the libtool generated binaries'
> path to PATH for `make check` before the testsuite is executed?

The normal way is to set AUTOTEST_PATH so that all the programs under
test are in it.  When using Autoconf, this is usually done via
the second argument to the AC_CONFIG_TESTDIR macro.
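
For example, with the layout you describe (and assuming the testsuite lives
in a tests/ directory), something like this in configure.ac:

  AC_CONFIG_TESTDIR([tests], [bin:bin2:runtime])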

I must be missing some context here, as I'm afraid I don't understand
what the problem is.  To use valgrind in an Autotest test suite together
with libtool, I would do something like this (untested):

  m4_divert_text([PREPARE_TESTS], [: ${LIBTOOL="$SHELL $builddir/libtool"}
  ])

  AT_TESTED([my_program])

  AT_SETUP([my test w/ valgrind])

  AT_CHECK([$LIBTOOL --mode=execute valgrind my_program], [...])
  [...]

  AT_CLEANUP

> Bonus:
> How to do this in a way that allows `make installcheck`?

If you use AUTOTEST_PATH to locate the programs under test, I wouldn't
expect there to be any particular problem with installcheck.

Hope that helps,
  Nick



Re: AC_PROVIDE{_IFELSE} not documented?

2023-06-18 Thread Nick Bowler
On 18/06/2023, Karl Berry  wrote:
> Maybe I'm going blind, but it seems that AC_PROVIDE and
> AC_PROVIDE_IFELSE are not documented in the Autoconf manual. AC_PROVIDE
> is mentioned but not described.  AC_PROVIDE_IFELSE is not mentioned.

When I reported this many years ago, the response[1] at the time was
"the macros are stable and intended to be usable; we're just missing
the documentation patch."

But neither I nor anyone else stepped up to actually write it.

[1] https://lists.gnu.org/archive/html/autoconf/2012-12/msg0.html

Cheers,
  Nick



Re: Which Perl versions Autoconf needs [PATCH included]

2023-03-30 Thread Nick Bowler
On 2023-03-30, Zack Weinberg  wrote:
[...]
> Because I don't think anyone else currently active in development has
> either the time or the expertise for it.  (Just as a data point here,
> the oldest version of Perl that I myself have any access to, presently,
> is 5.*16*, and I don't know how long that machine will stay that way.)

FWIW I still use Autoconf (actually, just autom4te/autotest) on machines
with perl 5.8.  These installations do have both Digest::SHA and Time::HiRes
available, though.

Though I am still using Autoconf 2.69 on these systems, as I'm not aware
of any Autotest-related problems that would be fixed by upgrading.

Happy to give new versions a go though.

Cheers,
  Nick



Re: AC_PROG_EGREP and $EGREP_TRADITIONAL and shell conditional statements

2023-03-28 Thread Nick Bowler
On 2023-03-28, Zack Weinberg  wrote:
> Can someone who understands the problem described at
> https://lists.gnu.org/archive/html/autoconf/2022-11/msg00129.html
> please construct a minimal, self-contained configure.ac that
> reproduces that problem?  It is difficult for me to tell whether
> anything needs to be fixed in Autoconf from this report, and I don't
> have time in the foreseeable future to try to cut down APR's gigantic
> configure.ac myself.

This should be a good approximation:

  % cat >configure.ac <<'EOF'
  AC_INIT([test], [0])

  AC_PROG_CPP
  AC_PROG_EGREP

  # uncomment to make this work on new autoconf
  # m4_ifdef([_AC_PROG_EGREP_TRADITIONAL], [_AC_PROG_EGREP_TRADITIONAL])

  if false; then
AC_EGREP_HEADER([printf], [stdio.h])
  else
AC_MSG_CHECKING([if stuff works])
AC_EGREP_HEADER([malloc], [stdlib.h],
  [AC_MSG_RESULT([ok])], [AC_MSG_RESULT([nope])])
  fi

  AC_OUTPUT
EOF

This works in autoconf 2.69, not in current master (though it works if
you uncomment the indicated line).

IMO it is reasonable to fix this in Autoconf, because it just seems
weird to me that AC_PROG_EGREP does not include the necessary egrep
setup for AC_EGREP_HEADER to work (it used to).

Cheers,
  Nick



Re: [sr #110846] cross-compilation is not entered when build_alias and host_alias are the same

2023-03-01 Thread Nick Bowler
On 2023-03-01, anonymous  wrote:
> This might be the desired use case, but when cross compiling with
> systems like buildroot one might have the same architecture on
> --build and --host.
>
> Example: Compile on Apple silicon (aarch64-unknown-linux-gnu) for a Cortex
> A75 based system (aarch64-unknown-linux-gnu). Cross compiling isn't
> automatically detected.

By setting --host and --build to the same value, this explicitly forces
non-cross-compilation mode in configure.

If you specify --host without also specifying --build, then configure will
run the auto-detection which I expect will work properly for you.

Probably the "vendor" field of the host triplet should have been set to
something different for these different systems, but I digress...

> And there seems to be no way to force it if we know that we are cross
> compiling.

Nevertheless, you can always force cross compilation mode by explicitly
setting cross_compiling=yes, for example:

  % ./configure cross_compiling=yes

Hope that helps,
  Nick



Re: Possible regressions with trunk autoconf (vs 2.71)

2022-11-18 Thread Nick Bowler
On 2022-11-18, Frederic Berat  wrote:
> The apr program has shown a weird behavior during configure execution:
[...]
> I found that the problem was actually that "$EGREP_TRADITIONAL" was
> undefined during the execution of AC_TYPE_UID_T.
> While the corresponding symbol was constructed within a case/esac earlier
> in configure, it isn't made available for the outer context, which leads to
> the false negative.

On the apr side, the fix is probably to rewrite the problematic case
statement using AS_CASE.  This will allow Autoconf to "hoist" the
expansion of _AC_PROG_EGREP_TRADITIONAL outside of the condition
so it actually gets executed all the time.
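
That is, roughly this kind of transformation (a hypothetical sketch, not
apr's actual code):

  dnl Instead of:
  dnl   case $something in
  dnl   foo) AC_TYPE_UID_T ;;
  dnl   esac
  dnl write:
  AS_CASE([$something],
    [foo], [AC_TYPE_UID_T])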

That being said ...

> I tried to add a "AC_PROG_EGREP" at the beginning of the configure.in, but
> that doesn't change anything, since _AC_PROG_EGREP_TRADITIONAL isn't
> required by it.
>
> The patch below solves the problem (without changes in apr), but that looks
> a bit dirty as AC_PROG_EGREP doesn't directly need
> _AC_PROG_EGREP_TRADITIONAL:
>
> diff --git a/lib/autoconf/programs.m4 b/lib/autoconf/programs.m4
> index 618f3172..5e206b13 100644
> --- a/lib/autoconf/programs.m4
> +++ b/lib/autoconf/programs.m4
> @@ -363,6 +363,7 @@ AC_DEFUN([AC_PROG_AWK],
>  # -
>  AC_DEFUN([AC_PROG_EGREP],
>  [AC_REQUIRE([AC_PROG_GREP])dnl
> +AC_REQUIRE([_AC_PROG_EGREP_TRADITIONAL])dnl
>  AC_CACHE_CHECK([for egrep], ac_cv_path_EGREP,
> [if echo a | $GREP -E '(a|b)' >/dev/null 2>&1
> then ac_cv_path_EGREP="$GREP -E"

... something like this seems reasonable to me, especially if it solves
the problem in apr, as I think it makes logical sense that AC_PROG_EGREP
would do the necessary setup for AC_EGREP_CPP to work.

Cheers,
  Nick



Re: On time64 and Large File Support

2022-11-15 Thread Nick Bowler
On 2022-11-15, Zack Weinberg  wrote:
> On Tue, Nov 15, 2022, at 12:49 PM, Nick Bowler wrote:
>> On 2022-11-13, Zack Weinberg  wrote:
>>> I have not pushed this, and have only tested it lightly on a current
>>> Linux.
>>> It needs testing on weird old systems, particularly old AIX, HP-UX,
>>> MinGW.
>>
>> I'd be happy to give it a go on my weird old systems ...
>
> I forgot to mention at the time:  Testing on systems *where time_t is only
> 32 bits wide by default* is especially useful.
>
>> /bin/sh ./config.status --recheck
>> running CONFIG_SHELL=/bin/sh /bin/sh ./configure --no-create
>> --no-recursion
>> ./configure: 6: cannot create .: Is a directory
>> ./configure: 6: cannot create .: Is a directory
>> [...]
>> checking for Perl >=5.10.0 with Time::HiRes::stat... configure:
>> error: no acceptable perl could be found in $PATH.
>> Perl 5.10.0 or later is required, with Time::HiRes::stat.
>> make: *** [Makefile:969: config.status] Error 1
>
> The procedure you used should have worked, assuming $PATH did not change
> from step to step.  One possible explanation is that there's a bug with
> building in the source directory -- at step 2, try instead
>
> mkdir _build
> cd _build
> ../configure
>
> Another possible explanation is that the bootstrap operation didn't set file
> timestamps accurately (perhaps because the filesystem you're on doesn't
> support high-resolution time stamps) and so it's trying to regenerate
> 'configure' with an _older_ autoconf which trips over some state left by the
> bootstrap process.  Another thing to try is
>
> ./bootstrap
> sleep 2 && touch aclocal.m4 && sleep 2 && touch Makefile.in && sleep 2 &&
> touch configure
> ./configure
> sleep 2 && touch config.status && sleep 2 && touch tests/aclocal Makefile
> lib/version.m4
> make
>
> Please let us know if either of those things helps.

It does appear to be regenerating configure with an old autoconf version (2.69).

But neither suggestion makes any difference.  Timestamps seem OK; it
appears that make is deciding to rebuild aclocal.m4 (and then configure) because
of prerequisites that do not exist outright:

  % make -d
  [...]
   Considering target file 'aclocal.m4'.
  [...]
  Prerequisite 'autoconf/autoupdate.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/autoscan.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/general.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/status.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/autoheader.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/autotest.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/programs.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/lang.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/c.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/erlang.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/fortran.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/go.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/functions.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/headers.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/types.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/libs.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/specific.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'autoconf/oldnames.m4' of target 'aclocal.m4' does not exist.
  Prerequisite 'm4/autobuild.m4' is older than target 'aclocal.m4'.
  Prerequisite 'm4/m4.m4' is older than target 'aclocal.m4'.
  Prerequisite 'm4/make-case.m4' is older than target 'aclocal.m4'.
  Prerequisite 'm4/perl-time-hires.m4' is older than target 'aclocal.m4'.
  Prerequisite 'configure.ac' is older than target 'aclocal.m4'.
 Must remake target 'aclocal.m4'.

(as there is a dummy rule for all of these files, their non-existence
triggers a rebuild instead of a fatal error).

OK, the files seem to be in lib/autoconf in the repository, so I used the
following procedure, which seems to work:

  % mkdir autoconf
  % cp lib/autoconf/*.m4 autoconf/
  % ./bootstrap
  % ./configure
  % make

Cheers,
  Nick



Re: On time64 and Large File Support

2022-11-15 Thread Nick Bowler
[dropping non-autoconf lists from Cc]

On 2022-11-13, Zack Weinberg  wrote:
> I have not pushed this, and have only tested it lightly on a current Linux.
> It needs testing on weird old systems, particularly old AIX, HP-UX, MinGW.

I'd be happy to give it a go on my weird old systems ...

>
> I don't think a 2.72 release tomorrow is realistic anymore.  The soonest
> after that I will be able to do one is next weekend, but that should give
> people time to experiment with this.

... but I'm unable to build current git master at all:

  % ./bootstrap
[ok, no errors]

  % ./configure
[...]
checking for Perl >=5.10.0 with Time::HiRes::stat... /usr/bin/perl
[...]
[ok, no errors]

  % make
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh '/srv/home/nbowler/misc/autoconf/build-aux/missing' aclocal-1.16 -I m4
configure.ac:28: warning: AC_INIT: not a literal: bug-autoc...@gnu.org
 cd . && /bin/sh /srv/home/nbowler/misc/autoconf/build-aux/missing automake-1.16 --gnu
configure.ac:28: warning: AC_INIT: not a literal: bug-autoc...@gnu.org
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh '/srv/home/nbowler/misc/autoconf/build-aux/missing' autoconf
configure.ac:28: warning: AC_INIT: not a literal: bug-autoc...@gnu.org
/bin/sh ./config.status --recheck
running CONFIG_SHELL=/bin/sh /bin/sh ./configure --no-create --no-recursion
./configure: 6: cannot create .: Is a directory
./configure: 6: cannot create .: Is a directory
[...]
checking for Perl >=5.10.0 with Time::HiRes::stat... configure: error: no acceptable perl could be found in $PATH.
Perl 5.10.0 or later is required, with Time::HiRes::stat.
make: *** [Makefile:969: config.status] Error 1

Am I missing a step here?  I normally build from releases.

It looks like the bootstrap procedure has changed compared to 2.71, which
builds OK for me from git (using autoreconf instead of the bootstrap script).

Cheers,
  Nick



Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-10 Thread Nick Bowler
On 2022-11-10, Zack Weinberg  wrote:
> The biggest remaining (potential) problem, that I’m aware of, is that
> AC_CHECK_FUNC unconditionally declares the function we’re probing for
> as ‘char NAME (void)’, and asks the compiler to call it with no
> arguments, regardless of what its prototype actually is.  It is not
> clear to me whether this will still work with the planned changes to
> the compilers.  Both GCC 12 and Clang 14 have on-by-default warnings
> triggered by ‘extern char memcpy(void);’ (or any other standard
> library function whose prototype is coded into the compiler) and this
> already causes problems for people who run configure scripts with
> CC='cc -Werror'.  Unfortunately this is very hard to fix — we would
> have to build a comprehensive list of library functions into Autoconf,
> mapping each to either its documented prototype or to a header where
> it ought to be declared; in the latter case we would also have to make
> e.g. AC_CHECK_FUNCS([getaddrinfo]) imply AC_CHECK_HEADERS([sys/types.h
> sys/socket.h netdb.h]) which might mess up configure scripts that
> aren’t expecting headers to be probed at that point.
>
> How important do you think it is for this to be fixed?

My gut feeling is that Autoconf should just determine the necessary
options to get compatible behaviour out of these modern compilers, at
least for the purpose of running configure tests.  For example, Autoconf
should probably build the AC_CHECK_FUNC programs using gcc's
-fno-builtin option, which should avoid problems with gcc complaining
about memcpy (and may also improve test accuracy, since gcc won't use
its knowledge of C library behaviour to possibly elide the call to
memcpy).
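
In the meantime, a package can apply the same idea itself around a
specific check.  A rough sketch (this assumes a GCC-compatible compiler,
so a real configure.ac would probably want to probe for the flag first):

  save_CFLAGS=$CFLAGS
  CFLAGS="$CFLAGS -fno-builtin"
  AC_CHECK_FUNCS([memcpy])
  CFLAGS=$save_CFLAGS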

It saddens me to see so much breakage happening in "modern C", a
language that has (until now) a long history of new language features
being carefully introduced to avoid these sort of problems.

The fact that even the C standard authors don't seem to care about
existing codebases is a concerning change in direction.  Nobody has
learned anything from the Python 3 debacle, I guess.

> p.s. GCC and Clang folks: As long as you’re changing the defaults out
> from under people, can you please also remove the last few predefined
> user-namespace macros (-Dlinux, -Dunix, -Darm, etc) from all the
> -std=gnuXX modes?

Meh, even though these macros are a small thing I don't accept the
"things are breaking anyway so let's break even more things" attitude.
This was something that many library authors did during the python 3
transition and that just made the problems orders of magnitude more
horrible.

Cheers,
  Nick



Re: [sr #110687] AC_C_BIGENDIAN fails when cross-compiling with -std=c11 and -flto

2022-07-27 Thread Nick Bowler
On 2022-07-27, anonymous  wrote:
> Follow-up Comment #2, sr #110687 (project autoconf):
>
>> It appears to me that this is an issue with cross compilation and strict
> conformance mode (-std=c11), not with -flto.  Could you please report what
> happens, using the same cross-compilation toolchain, if you run the same
> configure command but using CFLAGS="-std=c11" LDFLAGS="" ?
>
> In that case (only -std=c11) everything works as expected.
>
> checking for unistd.h... yes
> checking whether byte ordering is bigendian... no
> configure: creating ./config.status
> config.status: creating issue618config.h
>
>
> The issue only occurs when using -flto.

This is not all that surprising.

Since Autoconf cannot run programs when cross compiling, Autoconf has
to use some other method.  One thing it tries is to use various
extensions found in the system header files; however, this is obviously
not working with glibc when -std=c11 is used to disable extensions.

The other method inspects the code in compiled object files.  However,
-flto prevents generation of object files which are usable for this
purpose (as all the backend work is deferred to link time).  If you
use -ffat-lto-objects as well, the test should work again.
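
For example, something along these lines (illustrative; the extra flag
just needs to go wherever -flto is already being passed):

  ./configure CFLAGS="-std=c11 -flto -ffat-lto-objects" LDFLAGS="-flto"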

To fix this in Autoconf, probably just #defining _GNU_SOURCE or similar
directly in the test programs used by AC_C_BIGENDIAN can solve the
problem with the library header test.

To fix this in your package, I suggest just providing an action-if-unknown
argument to AC_C_BIGENDIAN, and adjust your package to work in that case
(usually this can be done easily with a runtime test).
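
A minimal sketch of the configure.ac side (WORDS_BIGENDIAN is the usual
symbol; ENDIAN_UNKNOWN is just an example name, and the actual run-time
detection would live in your C code):

  AC_C_BIGENDIAN(
    [AC_DEFINE([WORDS_BIGENDIAN], [1], [host is big-endian])],
    [],
    [AC_DEFINE([ENDIAN_UNKNOWN], [1],
       [byte order must be detected at run time])])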

In general, I don't recommend using strict standards-conformance modes
ever unless you are implementing standards-conformance test suites.
Such options typically cause more portability problems than they solve.

Cheers,
  Nick



Re: Generating configuration files conditionnally?

2022-06-16 Thread Nick Bowler
On 2022-06-16, Sébastien Hinderer  wrote:
> Is it possible to have the META files produced from META.in files but
> only if the package they describe has been enabled?

There should be no problem using AC_CONFIG_FILES conditionally.
For example:

  % cat >configure.ac <<'EOF'
AC_INIT([test], [0])

AC_ARG_ENABLE([foo])

AS_IF([test x"$enable_foo" = x"yes"], [AC_CONFIG_FILES([foo])])
AC_CONFIG_FILES([bar])

AC_OUTPUT
EOF
  % autoconf

  % ./configure --disable-foo
  configure: creating ./config.status
  config.status: creating bar

  % ./configure --enable-foo
  configure: creating ./config.status
  config.status: creating foo
  config.status: creating bar

Hope that helps,
  Nick



Re: Parallelization of shell scripts for 'configure' etc.

2022-06-14 Thread Nick Bowler
On 14/06/2022, Nick Bowler  wrote:
> On 2022-06-14, Michael Orlitzky  wrote:
>> On Mon, 2022-06-13 at 15:39 -0700, Paul Eggert wrote:
>>>
>>> I've wanted something like this for *years* (I assigned a simpler
>>> version to my undergraduates but of course it was too much to expect
>>> them to implement it) and I hope some sort of parallelization like this
>>> can get into production with Bash at some point (or some other shell if
>>> Bash can't use this idea).
>>>
>>
>> It looks like PaSh itself was designed and built well. The authors use
>> multiple test suites and PaSh is comparable to other shells in
>> correctness. Ultimately you wouldn't want a runtime dependency on
>> python in your /bin/sh, but as a first step... can PaSh run ./configure
>> already?
>
> The answer seems to be no.  For fun, I just tried pash-0.8 on a configure
> script and the shell itself crashes immediately.
>
> It was very difficult to install so I might not be smart enough to use it
> and botched the installation.  But pash appears to not understand shell
> variables in redirections:
>
>   % sh -c 'fd=1; echo hello >&$fd'
>   hello
>   % pa.sh -c 'fd=1; echo hello >&$fd'
>   [multiple pages of python line noise]
>   AttributeError: 'LP_union_node' object has no attribute 'narg'

I worked around this problem by changing the command to use eval instead.

Next, it chokes syntax like this:

  % sh -c 'case `(echo hello) 2>/dev/null` in *) echo yay ;; esac'
  yay
  % pa.sh -c 'case `(echo hello) 2>/dev/null` in *) echo yay ;; esac'
  /tmp/pash_5xGNxHR/tmpwgw5hrrs: line 1: echo hello 2>/dev/null :
syntax error in expression (error token is "hello 2>/dev/null ")

I changed the configure script to work around this problem.

It also seems to have some bizarre quoting bugs:

  % cat >test.sh <<'EOF'
if :; then var='"("'; fi
echo "$var"
EOF
  % sh test.sh
  "("
  % pa.sh test.sh
  /tmp/pash_sEonmPK/tmpcmsn6hyd: line 6: syntax error near unexpected token `('
  /tmp/pash_sEonmPK/tmpcmsn6hyd: line 6: ` { ( exit
"${pash_runtime_final_status}" ) ; } ; } ; } ; } ; }; then var=""("";
fi'

I worked around this problem in the configure script too.

At this point configure will start to run, but:

configure runs some commands like this:

  $SHELL script arguments

This fails when SHELL is pa.sh because an explicit "--" argument is
needed to disable pa.sh's option processing.  I worked around this
with the use of a wrapper script to invoke pa.sh.

configure runs config.guess with $CONFIG_SHELL, but this produces
garbage output (multiple lines of C code get printed to standard output)
when run under pa.sh, which ultimately fails when that garbage gets
passed to config.sub.  I worked around this problem by replacing
config.guess with a simple one liner.

Now configure runs to completion but the generated config.status script
crashes with more shell syntax problems.  I worked around this by
manually running config.status with a different shell.

The resulting config.h is correct but pa.sh took almost 1 minute to run
the configure script, about ten times longer than dash takes to run the
same script.  More than half of that time appears to be spent just
loading the program into pa.sh, before a single shell command is
actually executed.

Cheers,
  Nick



Re: Parallelization of shell scripts for 'configure' etc.

2022-06-14 Thread Nick Bowler
On 2022-06-14, Michael Orlitzky  wrote:
> On Mon, 2022-06-13 at 15:39 -0700, Paul Eggert wrote:
>>
>> I've wanted something like this for *years* (I assigned a simpler
>> version to my undergraduates but of course it was too much to expect
>> them to implement it) and I hope some sort of parallelization like this
>> can get into production with Bash at some point (or some other shell if
>> Bash can't use this idea).
>>
>
> It looks like PaSh itself was designed and built well. The authors use
> multiple test suites and PaSh is comparable to other shells in
> correctness. Ultimately you wouldn't want a runtime dependency on
> python in your /bin/sh, but as a first step... can PaSh run ./configure
> already?

The answer seems to be no.  For fun, I just tried pash-0.8 on a configure
script and the shell itself crashes immediately.

It was very difficult to install so I might not be smart enough to use it
and botched the installation.  But pash appears to not understand shell
variables in redirections:

  % sh -c 'fd=1; echo hello >&$fd'
  hello
  % pa.sh -c 'fd=1; echo hello >&$fd'
  [multiple pages of python line noise]
  AttributeError: 'LP_union_node' object has no attribute 'narg'

It crashes even if the command with such a redirection is not executed:

  % pa.sh -c 'fd=1; if false; then echo hello >&$fd; fi'
  [similar crash]

Cheers,
  Nick



Re: make check to test a few tests, not all

2022-05-19 Thread Nick Bowler
Hi,

On 2022-05-19, Mike Fulton  wrote:
> I am working through some bugs porting autoconf to z/OS and I'd like to run
> the testsuite with just a particular test (e.g. 318).
> For my scripts, my preference would be to do this via 'make check' instead
> of running the testsuite directly.

You can pass arguments to the testsuite via TESTSUITEFLAGS make variable.
For example, to run just test 318:

 % make check TESTSUITEFLAGS='318'

It looks like this fact is not clearly documented anywhere in the
distributed package.

It is mentioned in README-hacking, but that file is only available from
git and is not distributed.

Hope that helps,
  Nick



Re: Detecting gated functions w/ AC_CHECK_FUNCS()

2022-05-02 Thread Nick Bowler
Hi,

On 2022-05-02, Philip Prindeville  wrote:
> I was wondering how to do discovery of functions like open_memstream() which
> is only exposed by  when compilation used -D_GNU_SOURCE or
> -D_POSIX_C_SOURCE=200809L.

The "normal way" is to just use the AC_USE_SYSTEM_EXTENSIONS macro
early in your configure script which simply tries to turn every extended
C library function on.  Then you can simply not care at all about these
macros that are only really important when you are writing a compiler
test suites.
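
For example (a minimal sketch):

  AC_INIT([example], [1.0])
  AC_USE_SYSTEM_EXTENSIONS
  AC_CHECK_FUNCS([open_memstream])
  AC_OUTPUT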

That being said...

[...]
> How do I bracket a particular AC_CHECK_FUNCS() invocation with defines that
> might be critical?
>
> I tried using:
>
> save_CFLAGS="$CFLAGS"
> CFLAGS="CFLAGS -D_POSIX_C_SOURCE=200809L"

... the idea is right, but this assignment has a typo: it adds the
literal string "CFLAGS" to the compiler command line, probably causing
subsequent compiler invocations to fail (so the test will probably not
return usable results).
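
Presumably the intended assignment was:

  CFLAGS="$CFLAGS -D_POSIX_C_SOURCE=200809L"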

> AC_CHECK_FUNCS([open_memstream])
> CFLAGS="$save_CFLAGS"

Note that AC_CHECK_FUNCS caches the result so it cannot easily be used
in this manner if you want to check the same function multiple times
(e.g., to probe different settings for CPPFLAGS).  If this is a problem
you can use AC_LINK_IFELSE instead.
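
For example, something roughly like this (a sketch; the shell variable
name is arbitrary):

  AC_LINK_IFELSE(
    [AC_LANG_PROGRAM([[#include <stdio.h>]],
       [[char *p; size_t n; return !open_memstream(&p, &n);]])],
    [my_have_open_memstream=yes],
    [my_have_open_memstream=no])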

Hope that helps,
  Nick



Re: Wrong order of preprocessor and compiler flags

2022-03-24 Thread Nick Bowler
On 2022-03-24, Zack Weinberg  wrote:
> On Thu, Mar 24, 2022, at 11:13 AM, Nick Bowler wrote:
>> However, GNU coding standards state that CFLAGS should be the last
>> item on compilation commands, so it would appear that this is a case
>> where traditional "make" behaviour contrasts with GNU standards (which
>> Automake is following).
>
> Huh.  Is there a rationale given in the coding standard?  If not, do you
> have any idea who might remember the rationale?

The GNU standards just say this[1]:

  "Put CFLAGS last in the compilation command, after other variables
   containing compiler options, so the user can use CFLAGS to override
   the others."

When it comes to C(PP)FLAGS the concept of "overriding" options is a
bit hairy -- many C compiler options do not have direct methods to undo
their effects -- but whatever.

[1] https://www.gnu.org/prep/standards/standards.html#Command-Variables



Re: Wrong order of preprocessor and compiler flags

2022-03-24 Thread Nick Bowler
On 2022-03-23, Zack Weinberg  wrote:
> On Wed, Mar 23, 2022, at 11:31 AM, Evgeny Grin wrote:
>> I've found that everywhere in autoconf scripts flags are used like:
>> $CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >_MESSAGE_LOG_FD
>> while automake and libtool use flags in the other order:
>> $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS)
>> $(AM_CFLAGS) $(CFLAGS)
>
> I agree that this should be made consistent, but before we change
> anything, we need to check what the rules *built into GNU and BSD
> Make* do with CFLAGS and CPPFLAGS (and also CXXFLAGS, OBJCFLAGS, etc)
> because those are much much harder to get changed than anything in
> Automake or Autoconf, so we should aim to harmonize everything with
> them.

Practically all make implementations use the $(CFLAGS) $(CPPFLAGS)
ordering in their builtin .c.o inference rules.

However, GNU coding standards state that CFLAGS should be the last
item on compilation commands, so it would appear that this is a case
where traditional "make" behaviour contrasts with GNU standards (which
Automake is following).

Cheers,
  Nick



Re: portability of xargs

2022-02-15 Thread Nick Bowler
On 2022-02-14, Mike Frysinger  wrote:
> context: https://bugs.gnu.org/53340
>
> how portable is xargs ?  like, beyond POSIX, as autoconf & automake both
> support non-POSIX compliant systems.  i want to use it in its simplest
> form: `echo $var | xargs rm -f`.

As far as I can tell xargs was introduced in the original System V UNIX
(ca. 1983).  This utility subsequently made its way back into V10 UNIX
(ca. 1989) and subsequently 4.3BSD-Reno (ca. 1990) and from there to
basically everywhere.  The original implementation from System V
supports the "-x", "-l", "-i", "-t", "-e", "-s", "-n" and "-p" options.
Of these, POSIX only chose to standardize "-x", "-t", "-s", "-n" and
"-p" suggesting possible incompatibilities with other options.

HP-UX 11 xargs expects the last filename to be followed by a white-space
character, or it will be ignored:

  gnu% printf 'no blank at the end' | xargs printf '[%s]'; echo
  [no][blank][at][the][end]

  hpux11% printf 'no blank at the end' | xargs printf '[%s]'; echo
  [no][blank][at][the]

The HP-UX 11 behaviour is also observed on Ultrix 4.5, but not on
4.3BSD-Reno.  Since xargs input typically ends with a newline, this is
not a serious practical problem.

Cheers,
 Nick



Re: abs_top_srcdir broken?

2021-10-18 Thread Nick Bowler
On 2021-10-18, Sébastien Hinderer  wrote:
> Given the follwing configure.ac script:
>
> AC_INIT([demo], [demo], [0.1], [d...@demo.org])
> AC_MSG_NOTICE([abs_top_srcdir="$abs_top_srcdir"])
>
> [The] configure script produced by autoconf 2.69 prints:
>
> configure: abs_top_srcdir=""
>
> Is that an expected behaviour?

I don't know about "expected" but it appears to at least be longstanding
behaviour.  This variable is substituted into output files but this is
done directly by config.status and is not itself available within
configure scripts.

I can't say the documentation is particularly clear on this.  In the
section that defines abs_top_srcdir et al[1]:

  "The preset variables which are available during config.status (see
   Configuration Actions[2]) may also be used during configure tests."

If we interpret that as meaning "the variables that are both in this
list and in that other list[3]" then there is just one -- srcdir -- and
this one does indeed work as expected in configure scripts.  But this is
not a really obvious interpretation and there are many other variables
in the list (like CFLAGS) that are routinely used in configure tests.

Configure scripts are always executed from the top build directory so
most of these directory variables are not needed in tests.  If required,
it is easy enough to compute absolute directories in the shell:

  case $srcdir in
  /*) my_abs_top_srcdir=$srcdir ;;
  *) my_abs_top_srcdir=`pwd`/$srcdir ;;
  esac

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/html_node/Preset-Output-Variables.html
[2] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/html_node/Configuration-Actions.html
[3] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/html_node/Configuration-Actions.html#index-srcdir-1

Cheers,
  Nick



Re: Description not expanded in a call to AC_DEFINE

2021-08-31 Thread Nick Bowler
On 2021-08-31, Sébastien Hinderer  wrote:
> Many thanks for your prompt and helpful response!
>
> So in my case, the file in question is called "version.h", so am I thus
> correct that the description mechanism will not work and that I better
> put the comments, if any, in version.h.in?

Yes, if you're not using autoheader you can just manually put comments
in the version.h.in template file (these should not be on the same line
as the directives which are substituted by configure) and they will be
copied into the output version.h.
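
For example, assuming version.h is listed in AC_CONFIG_HEADERS, a
hand-maintained version.h.in might contain something like this (the
symbol name is just an illustration):

  /* The major number of the current PKG version.  */
  #undef PKG_VERSION_MAJOR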

Cheers,
  Nick



Re: Description not expanded in a call to AC_DEFINE

2021-08-31 Thread Nick Bowler
Hi Sébastien,

On 2021-08-31, Sébastien Hinderer  wrote:
[...]
> As the next step, I wanted to add a description as the third argument,
> hoping to find this description in the generated header file as
> documented, but that does not work. I did cleanupu the project to make
> sure the ancient generated header is deleted, then I did run autoconf
> and configure again, but the produced header is exactly as if I didn't
> add a third argument to my macro.
>
> The macro call looks like this:
>
> AC_DEFINE([PKG_VERSION_MAJOR], [PKG__VERSION_MAJOR],
>   [The major number of the current PKG version])

The third argument to AC_DEFINE only has any effect when autoheader
is used to generate config.h.in.

So if you just "run autoconf and configure again" that won't
be sufficient to do anything, you also need to run autoheader
to regenerate config.h.in with the new descriptions.

Cheers,
  Nick



Re: m4 macro expansion problem

2021-08-23 Thread Nick Bowler
On 2021-08-23, Nick Bowler  wrote:
> However, you might not notice that this text went unexpanded in
> Autoconf (which is in effect while processing aclocal.m4) is KILL,
> so all resulting text is simply discarded.

Erm, I appear to have accidentally some words...

I meant to say "the default diversion in Autoconf (which is in effect
while processing aclocal.m4) is KILL ..."

Cheers,
  Nick



Re: m4 macro expansion problem

2021-08-23 Thread Nick Bowler
On 2021-08-23, Sébastien Hinderer  wrote:
> I would like to express all this in m4, more precisely in aclocal.m4, so
> that the configure script has proper version information.
>
> At the moment I don't manage to do so and failed to find an exemple of a
> macro whose body can take several lines but with the spaces at the begin
> of the lines (except the first one) being ignored. So I resorted to
> format but I didn't manage to make that work either and it any way feels
> too complex to be correct.
>
> Here is how the code looks like at the moment:
>
> # Package version
> define(`PKG_VERSION_MAJOR', `4')dnl
> define(`PKG_VERSION_MINOR', `14')dnl
> define(`PKG_VERSION_PATCH_LEVEL', `0')dnl
> define(`PKG_VERSION_EXTRA', `dev0-2021-06-03')dnl
> define(`PKG_VERSION_EXTRA_PREFIX', `+')dnl could also be `~'
> define(`PKG_VERSION',
>   ``format(dnl
> PKG_VERSION_MAJOR.PKG_VERSION_MINOR`%s%s',dnl
> ifelse(PKG_VERSION_PATCH_LEVEL,`',`',.PKG_VERSION_PATCH_LEVEL),dnl
> ifelse(PKG_VERSION_PATCH_LEVEL,`',`',dnl
>   PKG_VERSION_EXTRA_PREFIX`'PKG_VERSION_EXTRA)dnl
>   )''dnl
> )dnl

First, a couple things:

  - Autoconf renames most builtin M4 macros to have an m4_ prefix.  So
in the context of aclocal.m4, you must use m4_define and m4_format.
(ifelse and dnl are not renamed, but m4_if is defined as an alias
for ifelse).

  - Autoconf changes the M4 open quotation mark to [ and the close
quotation mark to ].  The ` and ' characters cease to have
special meaning.

So almost certainly none of the text you wrote in aclocal.m4
does anything, because 'define' will not be recognized as a macro
in Autoconf.  However, you might not notice that this text went
unexpanded in Autoconf (which is in effect while processing
aclocal.m4) is KILL, so all resulting text is simply discarded.

(For this reason it is uncommon to use "dnl" at the top level here to
suppress unwanted blank lines in the output, as they will not be output
in any case).

Now, it is important to have a basic understanding of how M4 collects
arguments.  M4 roughly works like this:

  - When the ( that begins a macro argument list is identified, the
first thing that M4 does is delete all subsequent whitespace
characters until it finds the first non-whitespace character.

  - Then, M4 will look for the (unquoted) , or ) characters that
indicate the end of this argument: while it does this it expands any
unquoted macros found and this procedure continues until such an
unquoted comma or closing parenthesis is found (or the end of the
input, which would cause M4 to fail with an error).

  - This process repeats (including the removal of whitespace) for each
macro argument until all arguments have been collected, quotation is
deleted (just one level) and the macro body is substituted.
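
For instance (Autoconf's [ ] quoting is in effect here; the macro name
is made up for illustration):

  m4_define([showargs], [<$1><$2>])
  showargs(  hello,
             world)

expands to <hello><world>: the whitespace (including the newline) before
each argument is discarded during argument collection.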

Next, you should note that "dnl" is not very much like a comment you
might find in other languages.  It is a macro, and its side effect of
deleting input text occurs only when it is expanded.

This can cause some surprising results when 'dnl' is used in macro
arguments.  In your example, its use will interfere with the usual
removal of whitespace at the beginning of the arguments to 'format'.

Now you should hopefully have enough information to fix your macro
definition (bonus hint: it also looks overquoted).

Finally, I will note that Autoconf provides the macro m4_do which
simply expands to the text of all of its arguments.  And m4_join
does the same with a separator between the arguments.  Since M4
deletes whitespace at the beginning of each argument, these macros
can be useful for code formatting purposes.
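
For instance, the version macro could be built up along these lines
(a simplified sketch which drops the EXTRA handling from your example):

  m4_define([PKG_VERSION],
            [m4_do([PKG_VERSION_MAJOR],
                   [.PKG_VERSION_MINOR],
                   [.PKG_VERSION_PATCH_LEVEL])])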

Hope that helps!

Cheers,
  Nick



Re: [sr #110212] Transform pkgdatadir using program-prefix and -suffix

2021-07-28 Thread Nick Bowler
On 2021-07-27, Eric Siegerman  wrote:
> Follow-up Comment #4, sr #110212 (project autoconf):
>
> 0 Edit _configure.ac_: In the _AC_INIT()_ call, change the first argument
> to
> the name you want to use for the subdirectory
> 0 Run _autoconf_ (perhaps indirectly, if the package provides an
> _autogen.sh_
> script or the like)
> 0 Run _configure_ with _--program-prefix_, _--program-suffix_, and/or
> _--program-transform-name_, as desired
> 0 Once the build is finished, do a test install into a scratch location
> using
> _DESTDIR=/where/ever_, and look it over to make sure all the pathnames that
> needed to be changed, were in fact changed.

Instead of all this hackery you can just set pkgdatadir when you build
the program; surely that's easier.  For example:

  % make clean
  % make pkgdatadir=/wherever/you/want install

This is expected to work with all build systems that follow the GNU
coding standards.

Cheers,
  Nick



Re: Running autoconf and autoreconf without autotools in the path

2021-07-20 Thread Nick Bowler
On 2021-07-20, Christopher O Cowan  wrote:
>> On Jul 19, 2021, at 2:05 PM, Christopher O Cowan 
>>  wrote:
>> Just curious if there is a feature within autotools to allow me run
>> autoconf and similar utilities via an absolute path, without the autotools
>> suite commands, in the PATH.  Maybe this already exists, and I just
>> haven’t stumbled across it?

You should be able to configure and install autoconf with whatever
prefix setting you want and just run it from there.  What specific
problem are you having?

[...]
> So, looking at closely at the autoconf package, it seems autoreconf
> and autoheader (both written in perl), have this feature, for one or
> more of the ENV vars that I would expect.

Yes, if you are using the autoreconf helper script, the name of
all the tools it runs can be controlled by environment variables.
This is described in the manual[1].

Since autoreconf calls external tools that are not part of the autoconf
package, it cannot know in advance where these are installed so yes, you
will have to tell it if the result of a PATH lookup is not correct.

> Autoconf on the other hand, seems to only check for AUTO4MATE,

For autoconf, I think this should be all that matters?  Anyway for
autoconf you should not normally need to set anything: autoconf knows
where it was installed and will run the correct autom4te out of the box.

> a cursory check of aclocal shows it isn’t checking for any of these.

aclocal respects AUTOM4TE as well (note the spelling).  Furthermore,
when you configure automake it should embed the name used during
configuration into the installed script, so you should not normally
need to set anything except when you first install Automake.

I would say that using aclocal is probably the hardest part about
installing in a nonstandard prefix, because one of its jobs is to
pick up external macros installed by other packages.  If those are
installed with different prefixes aclocal will not find them.  But
you can make use of the 'dirlist' feature[2] to augment the search
path and make it more usable.
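
For example, a dirlist file (it lives in the aclocal directory of the
Automake installation; these paths are only illustrations) just lists
extra directories to search, one per line:

  % cat /opt/automake/share/aclocal/dirlist
  /usr/local/share/aclocal
  /opt/gnome/share/aclocal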

Note that since aclocal is part of Automake, not Autoconf, further
questions about it should be directed to the Automake list.


[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/autoconf.html#autoreconf-Invocation
[2] https://www.gnu.org/software/automake/manual/automake.html#Macro-Search-Path

Hope that helps,
  Nick



Re: autoupdate produces changes for the current ax_* macros from the archive

2021-05-19 Thread Nick Bowler
Hi Dima,

On 2021-05-19, dima.pasech...@cs.ox.ac.uk  wrote:
> what is the procedure for fixing autoconf-2.71-incompatible macros in
> the autoconf archive?
[...]
> For instance, autoupdate provides replacements for
> AC_WARNING in ax_compare_version.m4 from
> https://www.gnu.org/software/autoconf-archive/ax_compare_version.html

AC_WARNING has not been removed.

The only change is now you get a warning (with -Wobsolete) and autoupdate
will suggest to change it.  This macro has been deprecated[1] since
Autoconf 2.62 (ca. 2008).

Is there some problem with the replacement suggested by autoupdate?

> Also, there are few weird replacements for aliases generated, e.g.
> -AU_ALIAS([CHECK_SSL], [AX_CHECK_OPENSSL])
> +AU_ALIAS([AX_CHECK_OPENSSL], [AX_CHECK_OPENSSL])
> in ax_check_openssl.m4
> https://www.gnu.org/software/autoconf-archive/ax_check_openssl.html

This is a known problem with autoupdate[2].  It doesn't actually
understand m4 syntax so it often runs into problems similar to this one.

As a workaround, you can tweak the quoting to avoid such counterproductive
suggestions in the future; for example:

  AU_ALIAS([CHECK_][SSL], [AX_CHECK_OPENSSL])

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.62/autoconf.html#index-AC_005fWARNING-1271
[2] https://lists.gnu.org/archive/html/bug-autoconf/2021-01/msg00023.html



Re: Force 32 bit build

2021-05-06 Thread Nick Bowler
On 2021-05-06, aotto  wrote:
> I want to write a "autoconf/automake" script for an application ONLY for
> 32 bit on 64 bit Linux.
> This meant that the default for configure must be 32 bit and nothing else.
>
> I know that a user can do "configure CC="gcc -m32"... etc but this is
> NOT what I want.
> I want that the 'configure' script set the 32bin once at start and fix.

The general approach I would take to doing this in a configure script is
something like this:

Step 1)
Figure out how you are going to determine whether the compiler and
linker are producing the desired output format.  This should be done
in a portable way.

Step 2)
Use AC_LINK_IFELSE to compile a test program.  In the "action-if-true"
branch, check whether the linker output (which will be in the file
called conftest$EXEEXT) is as expected.  If that passes, you know that
no special compiler options are required.

Step 3)
If your verification fails, you can then temporarily alter CFLAGS and/or
LDFLAGS to test whatever combinations you think might work, then repeat
the AC_LINK_IFELSE and verification procedures to see if they actually
did work.

Step 4)
If a working method was found, then use AC_SUBST to substitute
appropriate variables and use them in your build scripts.

If no working method was found, use AC_MSG_FAILURE to report the error.

I will add that for this purpose you may find that using AC_COMPUTE_INT
(rather than AC_LINK_IFELSE) is more convenient when implementing your
configure test.  The overall structure is the same, however.
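
A very rough sketch of the idea, using pointer width as a stand-in for
whatever verification you settle on in step 1, and assuming a GCC-style
-m32 option (the variable name is arbitrary):

  AC_COMPUTE_INT([my_ptr_bits], [sizeof (void *) * CHAR_BIT],
                 [#include <limits.h>], [my_ptr_bits=0])
  AS_IF([test "$my_ptr_bits" -ne 32],
    [CFLAGS="$CFLAGS -m32"
     LDFLAGS="$LDFLAGS -m32"
     AC_COMPUTE_INT([my_ptr_bits], [sizeof (void *) * CHAR_BIT],
                    [#include <limits.h>], [my_ptr_bits=0])
     AS_IF([test "$my_ptr_bits" -ne 32],
       [AC_MSG_FAILURE([cannot figure out how to produce 32-bit output])])])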

Hope that helps,
  Nick



Re: Latest M4 fails M4 checks

2021-04-20 Thread Nick Bowler
On 2021-04-20, Jeffrey Walton  wrote:
> On Tue, Apr 20, 2021 at 7:00 AM Jeffrey Walton  wrote:
>>
>> I'm working on an Apple Mac-mini M1. I installed M4 1.4.18 in /usr/local.
>>
>> % /usr/local/bin/m4 --version
>> zsh: abort  /usr/local/bin/m4 --version
>> % /usr/local/bin/m4 -V
>> zsh: abort  /usr/local/bin/m4 -V
>> 
[...]
> % sudo lldb /usr/local/bin/m4
> (lldb) target create "/usr/local/bin/m4"
> Current executable set to '/usr/local/bin/m4' (arm64).
> (lldb) r --version
> Process 36645 launched: '/usr/local/bin/m4' (arm64)
> Process 36645 stopped
> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
> frame #0: 0x0001a32f1130 libsystem_kernel.dylib`__abort_with_payload
> + 8
> libsystem_kernel.dylib`__abort_with_payload:
> ->  0x1a32f1130 <+8>:  b.lo   0x1a32f1150   ; <+40>
> 0x1a32f1134 <+12>: pacibsp
> 0x1a32f1138 <+16>: stpx29, x30, [sp, #-0x10]!
> 0x1a32f113c <+20>: movx29, sp
> Target 0: (m4) stopped.
> (lldb) bt
> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
>   * frame #0: 0x0001a32f1130 libsystem_kernel.dylib`__abort_with_payload
> + 8
> frame #1: 0x0001a32f3a20
> libsystem_kernel.dylib`abort_with_payload_wrapper_internal + 104
> frame #2: 0x0001a32f3a54 libsystem_kernel.dylib`abort_with_payload +
> 16
> frame #3: 0x0001a3248864 libsystem_c.dylib`_os_crash_fmt.cold.1 +
> 80
> frame #4: 0x0001a31e3e74 libsystem_c.dylib`_os_crash_fmt + 164
> frame #5: 0x0001a3216aa4 libsystem_c.dylib`__vfprintf + 11604
> frame #6: 0x0001a323896c libsystem_c.dylib`__v2printf + 404
> frame #7: 0x0001a321df00 libsystem_c.dylib`_vsnprintf + 264
> frame #8: 0x0001a3212c58 libsystem_c.dylib`snprintf + 72
> frame #9: 0x00010002ae08 m4`vasnprintf + 1644
> frame #10: 0x00010002b34c m4`rpl_vasprintf + 40
> frame #11: 0x000100017ee8 m4`xvasprintf + 168
> frame #12: 0x000100017fac m4`xasprintf + 28
> frame #13: 0x00012ff0 m4`main + 100
> frame #14: 0x0001a331df34 libdyld.dylib`start + 4

This is clearly not an Autoconf problem.

GNU M4 has its own mailing list for bug reports[1].  I suggest sending
your bug report there: bug...@gnu.org

[1] https://lists.gnu.org/mailman/listinfo/bug-m4

Cheers,
  Nick



Re: autoconf-2.71: declaration ordering problem: ac_fn_c_try_run () is defined, but after attempted use

2021-04-01 Thread Nick Bowler
On 2021-04-01, Ondrej Dubaj  wrote:
> experiencing configure problem
>
> ./configure: line 18777: ac_fn_c_try_run: command not found
>
> It seems that CMU_HAVE_OPENSSL brings CMU_FIND_LIB_SUBDIR brings
> AC_CHECK_SIZEOF(long) ... that brings the use of *check_int and that
> brings the use of ac_fn_c_try_run ... but it doesn't bring the macro
> defining it.

These sort of issues are typically caused by underquoting, probably
within the definition of CMU_HAVE_OPENSSL and/or CMU_FIND_LIB_SUBDIR.

Due to technical limitations of m4, AC_REQUIRE does not work properly
when it is expanded during argument collection.  This is not usually
a problem when arguments are quoted properly: one of the reasons
why proper quoting is so important.

> IMO there's something wrong with calling AC_TRY_RUN in AC_CACHE_VAL.

If you could share the code in question, I can give more specific
comments.  But my totally wild guess is that the second argument to
AC_CACHE_VAL is not quoted.
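
That is, the general shape should be something like this, with the
second argument fully quoted (the cache variable and the trivial test
program here are placeholders):

  AC_CACHE_VAL([cmu_cv_example],
    [AC_TRY_RUN([int main (void) { return 0; }],
       [cmu_cv_example=yes],
       [cmu_cv_example=no],
       [cmu_cv_example=unknown])])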

Cheers,
  Nick



Re: config.sub/config.guess using nonportable $(...) substitutions

2021-03-09 Thread Nick Bowler
On 09/03/2021, Warren Young  wrote:
> On Mar 9, 2021, at 1:26 PM, Paul Eggert  wrote:
>>
>>> 1) There is no actual benefit to using $(...) over `...`.
>>
>> I disagree with that statement on technical grounds (not merely cosmetic
>> grounds), as I've run into real problems in using `...` along with " and
>> \,
>
> Me too, plus nesting.  The difference is most definitely not cosmetic.

I think what Karl means is that it is usually very easy to portably work
around the problems of nested and/or quoted `...` substitutions (usually
by just using a variable).
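
For example, instead of a nested $(basename $(pwd)), one can write
(an illustrative case):

  dir=`pwd`
  base=`basename "$dir"`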

In other words, the difference between a script using $(...) and an
equivalent, more portable script using `...` is only one of appearance.

Regardless, there are no quoted or nested substitutions whatsoever in
config.sub.  I see exactly one nested substitution in config.guess, and
just a handful of quoted ones.  None appear particularly challenging to
write portably.

> Autoconf came out in 1991, so it’s the equivalent of supporting Version 6
> Unix (1975) in the original release, which it probably didn’t do, given that
> the Bourne shell didn’t even exist at that point.
>
> Are the malcontents not expecting heroic levels of backwards compatibility
> that Autoconf never has delivered?

No, I'm just expecting that things are not broken gratuitously in core
portability tools because someone does not like the appearance of the
more portable syntax.

I _especially_ don't expect this kind of breakage when upgrading from one
Automake point release to another (1.16.1 to 1.16.3).

Cheers,
  Nick



Re: config.sub/config.guess using nonportable $(...) substitutions

2021-03-08 Thread Nick Bowler
On 2021-03-08, Tim Rice  wrote:
> On Mon, 8 Mar 2021, Nick Bowler wrote:
[...]
>> These scripts using $(...) are incorporated into the recently-released
>> Automake 1.16.3, which means they get copied into packages bootstrapped
>> with this version.  So now, if I create a package using the latest bits,
>> configuring with heirloom-sh fails:
>>
>>   % CONFIG_SHELL=/bin/jsh jsh ./configure CONFIG_SHELL=/bin/jsh
>>   configure: error: cannot run /bin/jsh ./config.sub
>
> But why would you use CONFIG_SHELL= to specify a less capable shell?
> It is there to specify a more capable shell in case it is not already
> detected.

It is simply a proxy to test Solaris /bin/sh behaviour using a modern
GNU/Linux system.  This is much easier and faster than actually testing
on old Solaris systems and, more importantly, anyone can download and
install this shell as it is free software and reasonably portable.

Obviously I can successfully run my scripts on GNU/Linux using a modern
shell such as GNU Bash.  But that's not the point: Autoconf and friends
are first and foremost portability tools.  For me the goal is that this
should be working anywhere that anyone might reasonably want to run it.

But right now, it seems these portability tools are actually *causing*
portability problems, rather than solving them.  From my point of view
this is a not so great situation.

Cheers,
  Nick



[PATCH] autotest: Avoid nonportable : redirections in functions.

2021-03-08 Thread Nick Bowler
Using : with redirections in a shell function is one of the nonportable
constructs discussed in the Autoconf manual; §11.4 "File Descriptors":

  Solaris 10 sh will try to optimize away a : command (even if it is
  redirected) ... in a shell function after the first call:
  [...]
  $ f () { : >$1; }; f y1; f y2; f y3;
  $ ls y*
  y1

Nevertheless, autotest-generated testsuites use exactly this sort of
redirection when preparing the environment for AT_CHECK.  The result,
when using such a shell, is that the stdout and stderr files do not
get truncated between multiple AT_CHECK invocations in a test group:
instead, each AT_CHECK command appends to the output of the previous
one.  This obviously does not end well.

The manual suggests using eval to work around this limitation, so
let's do just that.

* lib/autotest/general.m4 (at_fn_check_prepare_[no]trace): Work around
failure of : redirections in some shells.
---
 lib/autotest/general.m4 | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/autotest/general.m4 b/lib/autotest/general.m4
index 40c92d93..3e2bf510 100644
--- a/lib/autotest/general.m4
+++ b/lib/autotest/general.m4
@@ -260,7 +260,7 @@ at_fn_check_prepare_notrace ()
   $at_trace_echo "Not enabling shell tracing (command contains $[1])"
   AS_ECHO(["$[2]"]) >"$at_check_line_file"
   at_check_trace=: at_check_filter=:
-  : >"$at_stdout"; : >"$at_stderr"
+  eval ': >"$at_stdout"; : >"$at_stderr"'
 }
 
 AS_FUNCTION_DESCRIBE([at_fn_check_prepare_trace], [LINE],
@@ -270,7 +270,7 @@ at_fn_check_prepare_trace ()
 {
   AS_ECHO(["$[1]"]) >"$at_check_line_file"
   at_check_trace=$at_traceon at_check_filter=$at_check_filter_trace
-  : >"$at_stdout"; : >"$at_stderr"
+  eval ': >"$at_stdout"; : >"$at_stderr"'
 }
 
 AS_FUNCTION_DESCRIBE([at_fn_check_prepare_dynamic], [COMMAND LINE],
-- 
2.26.2




config.sub/config.guess using nonportable $(...) substitutions

2021-03-08 Thread Nick Bowler
Hi,

I noticed that config.sub (and config.guess) scripts were very recently
changed to use the POSIX $(...) form for command substitutions.

This change is, I fear, ill-advised.  The POSIX construction is
widely understood to be nonportable as it is not supported by
traditional Bourne shells such as, for example, Solaris 10 /bin/sh.
This specific portability problem is discussed in the Autoconf manual
for portable shell programming[1].

These scripts using $(...) are incorporated into the recently-released
Automake 1.16.3, which means they get copied into packages bootstrapped
with this version.  So now, if I create a package using the latest bits,
configuring with heirloom-sh fails:

  % CONFIG_SHELL=/bin/jsh jsh ./configure CONFIG_SHELL=/bin/jsh
  configure: error: cannot run /bin/jsh ./config.sub

  % jsh config.sub x86_64-pc-linux-gnu
  config.sub: syntax error at line 53: `me=$' unexpected

(The heirloom-sh is essentially Solaris /bin/sh but runs on GNU/Linux systems).

What was the motivation for this change?  Backquotes work fine and are
more portable.  Can we just revert it so the script works again with
traditional shells?  Surely these scripts should be maximally portable,
I would think?

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/autoconf.html#index-_0024_0028commands_0029

Cheers,
  Nick



Re: RFC: Bump minimum Perl to 5.18.0 for next major release of both Autoconf and Automake

2021-02-18 Thread Nick Bowler
On 2021-02-18, Karl Berry  wrote:
> I think the right thresholds are 5.10 for absolute minimum and 5.16
> for 'we aren't going to test with anything older than this'
>
> I appreciate the effort to increase compatibility with old versions.
>
> I imagine you could provide Digest::SHA "internally", or test for it as
> Nick suggested, but I know how much of a pain it is to avoid/check for
> use of things that have seemingly been around forever. (Comes up all the
> time in the TeX world.)

Just to clarify, I was not suggesting that any kind of test is needed
before going ahead and using this module.  If there is a good reason
to use the module in Autoconf, as far as I'm concerned we should just
go ahead and use it.

I was just pointing out that requiring this module in Autoconf does not,
by itself, imply requiring perl 5.10, as the module may be available on
older installations too.

The reason for failures due to a missing module like this will be
obvious immediately.  A configure test may be _nice_ but probably
just extra work that is not really needed.

Cheers,
  Nick



Re: RFC: Bump minimum Perl to 5.18.0 for next major release of both Autoconf and Automake

2021-02-18 Thread Nick Bowler
Hi Zack,

On 2021-02-17, Zack Weinberg  wrote:
> On Fri, Jan 29, 2021 at 5:54 PM Karl Berry  wrote:
>> But, I think it would be wise to give users a way to override the
>> requirement, of course with the caveat "don't blame us if it doesn't
>> work", unless there are true requirements such that nothing at all would
>> work without 5.18.0 -- which seems unlikely (and undesirable, IMHO).
>> 2013 is not that long ago, in autotime.
>
> This is a reasonable suggestion but Perl makes it difficult.
[...]
> What we could do is something like this instead:
>
>use 5.008;  # absolute minimum requirement
>use if $] >= 5.016, feature => ':5.16';  # enable a number of
> desirable features from newer perls
>
> + documentation that we're only _testing_ with the newer perls.

FWIW, I just checked and I do currently build an Autotest testsuite
on a system where "perl" is perl 5.8.3, which works on autoconf-2.69.

So I suppose if Autoconf required a newer version, and I required a
newer version of Autoconf, then this is a problem.  But due to the
nature of Autoconf this is exclusively my problem and does not impact
downstream users at all.  So I'd just solve the problem (perhaps by
running autom4te on an updated setup) and wouldn't be bothered if
things are broken for a reason.

Only testing with new(ish) perl versions is not at all a problem IMO.
Interoperability is always "best effort": nobody can test every possible
system configuration.  As long as we don't claim to support systems
that are never ever tested, people who care about particular systems
just have to speak up when things stop working.

> I did some more research on perl's version history (notes at end) and
> I think the right thresholds are 5.10 for absolute minimum and 5.16
> for 'we aren't going to test with anything older than this'.  5.10 is
> the oldest perl that shipped Digest::SHA, which I have a specific need
> for in autom4te;

... on the topic of reasons to break things, the perl 5.8 installation
in question does seem to have Digest::SHA available to it.  So for this
dependency I would suggest Autoconf should follow the Autoconf
philosophy: "you must have the Digest::SHA perl module" is different
from "you must have perl version 5.10 or newer".

> it is also the oldest perl to support `state` variables and the `//`
> operator, both of which could be quite useful.

However these new syntactic constructs are obviously unavailable.
I think "//" is not a great reason (by itself) to break compatibility
but "state" could be.

Cheers,
  Nick



Re: Weird behaviour about system types

2021-02-04 Thread Nick Bowler
Hi Sébastien,

On 2021-02-04, Sébastien Hinderer  wrote:
> Actually I find it odd that Debian installs cross-compilers under names
> that do not have the canonical system type as prefix.
>
> I can evenseee this line in my configure script:
>
> test -n "$host_alias" && ac_tool_prefix=$host_alias-
>
> So the computation of ac_tool_prefix does actually rely on host_alias,
> rather than host, which I find surprising.

The purpose of canonicalization is not to find the toolchain.  The user
specifies the actual name of the toolchain via the command-line options.
Since your toolchain is installed with a non-canonical prefix, configure
would not find the toolchain if the canonicalized name were used!

The reason to use the canonicalized names is for the scenario when you
want to write conditional code based on $host_os, $host_cpu, etc.
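
For example (a sketch; the variable being set is made up):

  AC_CANONICAL_HOST
  AS_CASE([$host_os],
    [mingw*],  [my_platform=windows],
    [darwin*], [my_platform=macos],
    [my_platform=posix])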

If you are not writing code based on these split-out variables, it is
probably not needed to use AC_CANONICAL_xxx macros.

Incidentally, I think the current description of the AC_xxx_TARGET_TOOL(S)
macros in the Autoconf manual is wrong.  All of these macros appear to be
searching based on the $target_alias (as is sensible) rather than the
canonical target name as stated in the documentation.  However they all
seem to mishandle the case where target_alias is empty (because the user
did not specify --target), oops.

These macros pull in AC_CANONICAL_TARGET as a dependency but I don't
see why they bother, as they do not appear to make use of the target_cpu,
target_vendor or target_os variables at all.

(AC_CHECK_TOOL and friends do not have these problems).

Cheers,
  Nick



Re: Weird behaviour about system types

2021-02-04 Thread Nick Bowler
Hi Sébastien,

On 2021-02-04, Sébastien Hinderer  wrote:
[...]
> I am calling the generated configure script as follows:
>
>   ./configure --build=x86_64-pc-linux-gnu --host=aarch64-linux-gnu
>
> I am getting the following output:
>
> checking build system type... x86_64-pc-linux-gnu
> checking host system type... aarch64-unknown-linux-gnu
> checking target system type... aarch64-unknown-linux-gnu
>
> I am okay with the first line but to absolutely not understand the second
> and third line. Why do the host and system types contain an "unknown"
> string?
>
> Since I gave the --host=aarch64-linux-gnu arguemnt, I expected this to
> be the canonical system type. And also, I thought the target system type
> would default to the user-provided host system type, which does not seem
> to be the case.

The system type aarch64-linux-gnu is not in canonical form.  A canonical
system type has three parts: CPU, vendor and OS.

In this case you have specified the host CPU (x86_64 / aarch64) and OS
(linux-gnu), but not the host vendor.  So the canonicalization process
sets the vendor to "unknown" (this transformation is performed by
config.sub).

> Should I then rely on the host_alias and target_alias variables? It
> feels odd because they may not be in canonical form and also this means
> that the target will have to be explicitly given, which I thought I
> don't have to do.

Without knowing what your goals are, I cannot make any recommendation.

Cheers,
  Nick



Re: Automake's file locking

2021-02-03 Thread Nick Bowler
On 2021-02-03, Bob Friesenhahn  wrote:
> GNU make does have a way to declare that a target (or multiple
> targets) is not safe for parallel use.  This is done via a
> '.NOTPARALLEL: target' type declaration.

According to the manual[1], prerequisites on the .NOTPARALLEL target are
ignored and this will simply disable parallel builds completely for
the entire Makefile.  I did a quick test and the manual seems to be
accurate about this.

Order-only prerequisites can be used to prevent GNU make from running
specific rules in parallel.  These are more difficult (but not impossible)
to declare in an interoperable way.

[1] https://www.gnu.org/software/make/manual/make.html#index-_002eNOTPARALLEL

Cheers,
  Nick



Re: couple notes about post-2.71 branch management

2021-02-02 Thread Nick Bowler
Hi Zack,

On 2021-01-29, Zack Weinberg  wrote:
> Finally, to help us keep the development series on branch and trunk
> straight, I tagged the "post-release administrivia" commit on the
> trunk as v2.72a and on the branch as v2.72b.  This will make the
> output of autoconf --version be obviously different between a build
> from the trunk and a build from the branch. Think of them as arbitrary
> labels (but m4_version_compare greater than 2.71 and less than 2.72).
> I don't plan to do releases with either version number.

I just pulled the latest autoconf master, and it seems master branch was
tagged v2.62a[1], rather than v2.72a as described here.

The result is slightly confusing :)

[1] https://git.savannah.gnu.org/gitweb/?p=autoconf.git;a=tag;h=refs/tags/v2.62a

Cheers,
  Nick



Re: version string comparison

2021-01-30 Thread Nick Bowler
On 2021-01-30, Thien-Thi Nguyen  wrote:
> In GNUnet configure.ac, there is the fragment:
>
>  # test for libunistring
>  gl_LIBUNISTRING
>  AS_IF([test "x$gl_libunistring_hexversion" = "x" || test
> "$gl_libunistring_hexversion" -le 2305],
>[AC_MSG_ERROR([GNUnet requires libunistring >= 0.9.1.1])])
>
> that uses the var ‘gl_libunistring_hexversion’ which is
> undocumented (IIUC).  OTOH, var ‘LIBUNISTRING_VERSION’ is
> indeed documented, and has value something like "0.9.10".
>
> Is there any Autoconf support for comparing these two version
> strings: "0.9.1.1" and "0.9.10"?

I think AS_VERSION_COMPARE[1] should do the trick?
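
For example, something along these lines (an untested sketch based on
the fragment above):

  AS_VERSION_COMPARE([$LIBUNISTRING_VERSION], [0.9.1.1],
    [AC_MSG_ERROR([GNUnet requires libunistring >= 0.9.1.1])])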

[1] 
https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.70/autoconf.html#index-AS_005fVERSION_005fCOMPARE

Cheers,
  Nick



Re: Automake's file locking (was Re: Autoconf/Automake is not using version from AC_INIT)

2021-01-28 Thread Nick Bowler
On 2021-01-28, Zack Weinberg  wrote:
> There is a potential way forward here.  The *only* place in all of
> Autoconf and Automake where XFile::lock is used, is by autom4te, to
> take an exclusive lock on the entire contents of autom4te.cache.
> For this, open-file locks are overkill; we could instead use the
> battle-tested technique used by Emacs: symlink sentinels.  (See
> https://git.savannah.gnu.org/cgit/emacs.git/tree/src/filelock.c .)
>
> The main reason I can think of, not to do this, is that it would make
> the locking strategy incompatible with that used by older autom4te;
> this could come up, for instance, if you’ve got your source directory
> on NFS and you’re building on two different clients in two different
> build directories.  On the other hand, this kind of version skew is
> going to cause problems anyway when they fight over who gets to write
> generated scripts to the source directory, so maybe it would be ok to
> declare “don’t do that” and move on.  What do others think?

I think it's reasonable to expect concurrent builds running on different
hosts to work if and only if they are in different build directories and
no rules modify anything in srcdir.  Otherwise "don't do that."

If I understand correctly the issue at hand is multiple concurrent
rebuild rules, from a single parallel make implementation, are each
invoking autom4te concurrently and since file locking didn't work,
they clobber each other and things go wrong.

I believe mkdir is the most portable mechanism to achieve "test and set"
type semantics at the filesystem level.  I believe this works everywhere,
even on old versions of NFS that don't support O_EXCL, and on filesystems
like FAT that don't support any kind of link.
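
A typical shape for that idiom (a sketch; the lock directory name is
arbitrary):

  until mkdir /tmp/myprog.lock 2>/dev/null; do
    sleep 1    # another process holds the lock
  done
  trap 'rmdir /tmp/myprog.lock' EXIT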

The challenge with alternate filesystem locking methods compared to
proper file locks is that you need a way to recover when your program
dies before it can clean up its lock files or directories.

Could the issue be fixed by just serializing the rebuild rules within
make?  This might be way easier to do.  For example, we can easily
do it in NetBSD make:

  all: recover-rule1 recover-rule2
  clean:
  	rm -f recover-rule1 recover-rule2

  recover-rule1 recover-rule2:
  	@echo start $@; sleep 5; :>$@; echo end $@

  .ORDER: recover-rule1 recover-rule2

Heirloom make has a very similar mechanism that does not guarantee
relative order:

  .MUTEX: recover-rule1 recover-rule2

Both of these will ensure the two rules are not run concurrently by a
single parallel make invocation.

GNU make has order-only prerequisites.  Unlike the prior methods, this
is trickier to do without breaking other makes, but I have used a method
like this one with success:

  # goal here is to get rule1_seq set to empty string on non-GNU makes
  features = $(.FEATURES) # workaround problem with old FreeBSD make
  orderonly = $(findstring order-only,$(features))
  rule1_seq = $(orderonly:order-only=|recover-rule1)

  recover-rule2: $(rule1_seq)

I don't have experience with parallel builds using other makes.

Cheers,
  Nick



Re: autoupdate: AU_ALIAS shouldn't affect itself

2021-01-26 Thread Nick Bowler
On 2021-01-26, egall--- via Bug reports for autoconf
 wrote:
> Say I have an M4 macro file with an AU_ALIAS usage in it like this:
>
> AU_ALIAS([BNV_HAVE_QT], [AX_HAVE_QT])
>
> If I run autoupdate on this file, that will become:
>
> AU_ALIAS([AX_HAVE_QT], [AX_HAVE_QT])
>
> This seems pointless, as now the AU_ALIAS won't do what it was originally
> meant to do anymore. Perhaps autoupdate could be updated to stop making
> this change.

It does seem counterproductive.  Unfortunately autoupdate doesn't actually
understand m4 syntax so it often runs into problems similar to this one.  I'm
sure it could be improved to better handle this specific case, though.

But for the same reason, you should be able to work around the problem
just by quoting differently, for example:

  AU_ALIAS([BNV_][HAVE_QT], [AX_HAVE_QT])

Cheers,
  Nick



Re: Future plans for Autotools

2021-01-25 Thread Nick Bowler
On 2021-01-25, John Calcote  wrote:
> On Mon, Jan 25, 2021 at 12:26 PM Nick Bowler  wrote:
>> On 2021-01-25, Zack Weinberg  wrote:
>> > I'm not at all familiar with Automake's internals, but the reason I
>> > suggested taking advantage of GNU make extensions was the potential
>> > for _complexity_ reduction of the generated Makefile, not performance.
>> > For instance, this generated rule from one of my other projects [...]
>>
>> To be honest if Automake-generated Makefile.in files only worked
>> for users with, say, sufficiently modern versions of GNU Make, I'm
>> not sure there would be any point in using Automake.
>
> I'm not sure I see your point Nick. Why use Automake? Because I'd much
> rather write (and maintain) two lines of automake code than even a single
> page of GNU make code.

I'm trying to say that if you are going to force users to use GNU make
anyway, then I think most if not all of Automake's features would be
more effectively implemented by one or more "include"-able GNU make
snippets rather than using a standalone perl-based preprocessor stage
like Automake that introduces its own unique set of problems.

This approach is very typically used in the BSD world, where the build
environment is centered around one specific make implementation and
everyone shares the same set of common build recipes.

But for me, I want my packages to be widely portable, and out-of-the-box
compatibility with the default "make" implementations on a wide variety
of real-world platforms, to the greatest extent possible, is important.
I personally don't want to ask users of non-GNU systems to install GNU
make just because the Makefile would be slightly easier to write.  Today,
I use Automake to help me achieve this goal.  If a new version of
Automake were to make that impossible, because its own rules will not
run on other makes, then I suppose I would not be using that version.

This doesn't mean everything needs to work _perfectly_ on every make,
but I expect at least "./configure --whatever-options && make install"
to work everywhere, and for incremental rebuilds to work contingent on
functional dependency tracking (which in practice is almost everywhere).

Cheers,
  Nick



Re: Future plans for Autotools

2021-01-25 Thread Nick Bowler
On 2021-01-25, Zack Weinberg  wrote:
> I'm not at all familiar with Automake's internals, but the reason I
> suggested taking advantage of GNU make extensions was the potential
> for _complexity_ reduction of the generated Makefile, not performance.
> For instance, this generated rule from one of my other projects [...]

To be honest if Automake-generated Makefile.in files only worked
for users with, say, sufficiently modern versions of GNU Make, I'm
not sure there would be any point in using Automake.

GNU make is expressive enough to implement pretty much every useful
feature of Automake directly as makefile rules.  So for sure, it might
be useful to have an includable snippet or something that makes it
easier to correctly implement the various targets specified by the GNU
Coding Standards, and it might be useful to keep things like the
"compile" and "install-sh" scripts presently bundled with Automake, but
I think the result would no longer really be Automake.

The whole reason I use Automake is because implementing conceptually
simple things like per-target CFLAGS is a pain in the butt without make
features such as target-specific variables.  Automatic dependency
generation is a pain without "-include".  Maintaining separate lists of
object and source files is a pain.  Suffix rules are limited in their
expressiveness.
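
(To give a concrete flavour, here is a hypothetical Makefile.am fragment,
with made-up names, giving one program its own flags:

  bin_PROGRAMS = foo bar
  foo_SOURCES = foo.c util.c
  bar_SOURCES = bar.c util.c
  bar_CFLAGS = $(AM_CFLAGS) -DUSE_THREADS

Automake expands this into portable per-object rules, so the result still
works with make implementations that lack target-specific variables.)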

All of that pain goes away if a package can depend on GNU make.
Why would anyone bother with Automake then?  In most cases it'd be
easier, simpler and more flexible to just write compilation rules by
hand, and I think the more involved cases would be better handled by
copy+paste boilerplate examples or includable snippets than with a
discrete build tool like Automake.

When I can assume every user is going to be using GNU make (unreleased
stuff mostly), I never bother with Automake.  As GNU Emacs was mentioned
as a package requiring GNU make, I notice that they do not appear to use
Automake either.

> Automake _does_ make heavy use of shell constructs embedded inside
> frequently-executed rules, for instance
>
> .c.o:
> $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\
> $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\
> $(am__mv) $$depbase.Tpo $$depbase.Po
>
> which looks like it could become
>
> %.o: %.c
> $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(@D)/$(DEPDIR)/$(*F).Tpo \
> -c -o $@ $<
> $(AM_V_at)$(am__mv) $(@D)/$(DEPDIR)/$(*F).Tpo $(@D)/$(DEPDIR)/$(*F).Po
>
> and enable Make to bypass the shell altogether.  Might be worth
> benchmarking on a big program.  Has to be an executable, not a
> library, though; for libraries, the overhead of the libtool script is
> going to dominate.

I would like to mention that, if a change like this is a perfomance
win, it is almost certainly possible to get the performance benefit
without sacrificing correct operation on other makes.

In particular, the $(@D), $(@F), and $(*F) variables are specified by
POSIX and are widely portable.  The only portability problem I am aware
of is this one related to the D-suffixed variables, involving a rather
obscure make implementation, and is often not a significant problem in
practice (for example, writing ./$(@D) instead is typically sufficient
to avoid problems due to this issue).

  % : >baz.c
  % cat >Makefile <<'EOF'
foo.o: foo/bar.c baz.o
	@echo target=$@ directory=$(@D) file=$(@F)
foo/bar.c:
	@echo target=$@ directory=$(@D) file=$(@F)
.c.o:
	@echo stem=$* directory=$(*D) file=$(*F)
EOF
  % gmake --version
  GNU Make 4.3
  Built for x86_64-pc-linux-gnu
  Copyright (C) 1988-2020 Free Software Foundation, Inc.
  [...]

  % gmake
  target=foo/bar.c directory=foo file=bar.c
  stem=baz directory=. file=baz
  target=foo.o directory=. file=foo.o

  % dmake -V
  dmake - Version 4.12 (x86_64-pc-linux-gnu)
  Copyright (c) 1990,...,1997 by WTI Corp.

  % dmake
  target=foo/bar.c directory=foo/ file=bar.c
  stem=baz directory= file=baz
  target=foo.o directory= file=foo.o

Cheers,
  Nick



Re: Autoconf/Automake is not using version from AC_INIT

2021-01-24 Thread Nick Bowler
On 2021-01-24, Peter Johansson  wrote:
> I've managed to reproduce the behavior Bob describes in the attached
> script. If we touch the timestamp of configure.ac, running autoconf will
> update the timestamp of configure. But if the autoconf is triggered by
> something else for example if ChangeLog has been touched, then autoconf
> won't touch configure. I suppose that behavior of autoconf is too
> established to be changed, but I think making --force default would be
> more intuitive.
>
> One solution is to put the content of the AC_INIT arguments into
> dedicated files and add this file to  CONFIGURE_DEPENDENCIES, which will
> reduce the risk for this to happen but it's still possible that this
> file becomes newer than configure and one is back to autoconf being
> triggered at every 'make'.

Another possible solution: instead of using CONFIGURE_DEPENDENCIES, you can
pass any filename you like as the first argument of m4_include.  This will
get picked up by both Automake and Autoconf to trigger regeneration of
configure.

You don't need to _actually_ include the file as that may have unwanted
side effects.  The M4 traces that drive this behaviour only care about
the names of macros, not their effects.  So something like:

  m4_pushdef([m4_include])
  m4_include([ChangeLog])
  m4_popdef([m4_include])

should cause autoconf to update configure whenever ChangeLog is
touched (and Automake should automatically pick it up to trigger rebuilds).

Cheers,
  Nick



Re: Future plans for Autotools

2021-01-22 Thread Nick Bowler
As always, thanks for all your effort Zack!

I wanted to share some of my thoughts on Autoconf and friends.  Maybe I
wrote too much.

For me the most important requirement of the GNU build system is that
it must be as straightforward as possible for novice users to build free
software packages from source code, with or without local modification.

This is what empowers users with the benefits of free software.  If
users are unable to build or modify the software that they use, they
are unable to take advantage of those benefits.

For me, every other consideration is secondary.

The interface consistency prescribed by the GNU coding standards goes
a long way: you learn the steps for one package and can apply that
knowledge to almost any other package.

The trend towards requiring everyone to build from VCS snapshots
and requiring zillions of specific versions of various build tools
is concerning.  Unfortunately I think many developers don't really
care about the user experience when it comes to building their software
releases from source.

This brings me to another important strength of the GNU Build System: if
I prepare a package today I want to be confident that people will still
be able to build it 5, 10, 20 or more years from now.

Now obviously we can't predict the future but we can look to past
experience: just today, I unpacked GNU Bison 1.25 (ca. 1996) on a modern
GNU/Linux system, running on a processor architecture and distribution
that didn't even exist back then, and it builds *out of the box*.

Typical issues encountered with old GNU packages are usually very minor
if you have any problems at all.  For a more complex example, I tried
building glib-1.2.10 (ca. 2001).  I had to update config.sub/config.guess
to the latest, set CC='gcc -std=gnu89' (because the code does not work with
C99 inline) and edit one line of code to disable use of an obsolete GNU C
extension (both compilation problems are due to not following the Autoconf
philosophy and using version checks instead of feature checks, oops!)

My general experience with CMake is that you probably can't build any
old packages because whatever version of CMake you have available simply
doesn't understand the package's build scripts, and the version which
could understand them just doesn't work on your system because you have
a newer processor or something.

I don't have enough experience with Meson to say.  Mainstream free
software packages have only very recently started using it.  On the
GNU side, glib-2.60 (ca. 2019) converted to meson and I am able to
build it.  If possible, I will have to try again in 2039.  I bet the
autoconf-based glib-1.2.10 tarball from 2001 will still mostly work,
and so will the 1996 version of GNU Bison.

Cheers,
  Nick



Re: Building a cross-compiler

2021-01-20 Thread Nick Bowler
On 2021-01-20, Sébastien Hinderer  wrote:
> I am in charge of making cross-compilation possible for the OCaml
> language, given that the compiler's build system uses autoconf. The
> compiler is written in OCaml itself and has a runtime written in C.
>
> To start experimenting, I am trying to build a Linux to Windows64(MINGW)
> cross-compiler. So the compiler's build and host system types are Linux
> 64 and the target type is Windows64(MinGW).
[...]
> ./configure \
>   --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu \
>   --target=x86_64-w64-mingw32 \
>   CC=x86_64-w64-mingw32-gcc
>
> But that does not quite work because, since build and host system types
> are equal, autoconf assumes we are not in cross-compiling mode and thus
> tries to run the test programs it compiles, which is actually not
> possible.

One thing that might help you is to know that you can just add
cross_compiling=yes to the configure command line to force
configure into cross compilation mode.

If you only need the C compiler to produce code for the target system
(and never for the host system) then this might even be sufficient for
your use case.
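
For example (untested), taking the invocation from your message and just
forcing the mode:

  ./configure \
    --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu \
    --target=x86_64-w64-mingw32 \
    CC=x86_64-w64-mingw32-gcc cross_compiling=yes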

Unfortunately Autoconf does not directly support detection of multiple
C compilers (e.g., to get a different compiler for each of the build,
host and target systems).  I think the Autoconf Archive has something
to get a compiler for the build system which might be adaptable for
target as well.

Cheers,
  Nick



Re: Expansion of @libexecdir@ in .desktop.in file includes ${exec_prefix}

2021-01-18 Thread Nick Bowler
On 2021-01-19, Eli Schwartz  wrote:
> On 1/18/21 11:24 PM, Nick Bowler wrote:
>> This is the only way to make your package follow the GNU coding
>> standards, which says users must be able to override these variables
>> on the make command line.  For example:
>>
>>% ./configure
>>% make install prefix=/some/where
>>
>> is supposed to work.  So to make that happen, the rule of thumb is only
>> reference the installation variables in your makefiles.
>
> And here I was under the impression those variables are intentionally
> not expanded during AC_CONFIG_FILES, so that e.g. pkg-config files
> (which do support parsing variables defined earlier in the file) could
> be configured .pc.in -> .pc and have ./configure --prefix='/usr'
> --libdir='${prefix}/lib' actually insert '${prefix}/lib' into the .pc
> file rather than '/usr/lib'.

The gory details for these variables, including rationale for why they
work the way they do and also notes on how to use them correctly, is
all in the Autoconf manual[1].

This includes the note:

  "... you should not use these variables except in makefiles."

and

  "... you should not rely on AC_CONFIG_FILES to replace bindir and
   friends in your shell scripts and other files; instead, let make
   manage their replacement."

along with examples of make rules to do it properly.

> Certainly, it's traditional to create these files via configure, not
> make, but modifying the install prefix during "make install" would
> produce rather incorrect output there, whether you've run "make" or not.
>
> (Yes, GNU projects which presumably must follow the GNU coding
> standards, nevertheless distribute pkg-config files like this.)

GNU packages are not immune to mistakes.

> It's rather doubtful you'd be able to rely on rebuilding built objects
> that embed those options on the compiler command line either, if you run
> make && make prefix=/something/else install.

If you build with one prefix and then install with another, a package
complying with the GNU coding standards will not rebuild anything when
installing.  It would generally take extra work to get such rebuilding
to happen anyway.

[1] 
https://www.gnu.org/software/autoconf/manual/autoconf.html#Installation-Directory-Variables

Cheers,
  Nick



Re: Expansion of @libexecdir@ in .desktop.in file includes ${exec_prefix}

2021-01-18 Thread Nick Bowler
On 2021-01-18, Stefan Koch  wrote:
> The line:
> Exec=@libexecdir@/usbauth-notifier/usbauth-notifier
>
> from
> https://github.com/kochstefan/usbauth-all/blob/master/usbauth-notifier/data/usbauth-notifier.desktop.in
>
> will expanded to:
> Exec=${exec_prefix}/libexec/usbauth-notifier/usbauth-notifier
>
> But desktop-Files doesn't allow variables.
>
> Do you have an idea how to get the line expanded to:
> Exec=/usr/libexec/usbauth-notifier/usbauth-notifier
>
> without the ${exec_prefix} variable?

The normal way to do this is to perform the necessary substitutions in
make rules, as in make rules you can use make variables which will be
expanded correctly.

Alternately you could generate the entire file from a make rule which
might be reasonable for a small file like this.
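
Either way, a minimal sketch (untested; it borrows the sed idiom from the
Autoconf manual and uses your file names, and the command lines must start
with a tab in the real makefile):

  edit = sed -e 's|@libexecdir[@]|$(libexecdir)|g'

  usbauth-notifier.desktop: usbauth-notifier.desktop.in Makefile
  	rm -f $@ $@.tmp
  	$(edit) '$(srcdir)/usbauth-notifier.desktop.in' >$@.tmp
  	mv $@.tmp $@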

This is the only way to make your package follow the GNU coding
standards, which says users must be able to override these variables
on the make command line.  For example:

  % ./configure
  % make install prefix=/some/where

is supposed to work.  So to make that happen, the rule of thumb is only
reference the installation variables in your makefiles.

Hope that helps,
  Nick



Re: realpath fails on MacOS 11.1 (big sur), with bugfix

2021-01-16 Thread Nick Bowler
On 2021-01-16, Michael Labbé  wrote:
> realpath() is failing to compile for me on MacOS 11.1 on an Apple M1 Mac.  I
> encountered this when building GNU Global Tags from the most recent source
> archive at global-6.6.5.tar.gz.
>
> This is due to an error because realpath cannot be found. The fix is to
> modify configure to include stdlib.h.
>
> Line 14291 of configure on global-6.6.5:
>
> main(){ (void)realpath("/./tmp", (void *)0); return 0; }
> Insert one line before that:
>
> #include <stdlib.h>
> main(){ (void)realpath("/./tmp", (void *)0); return 0; }
> configure now succeeds.
>
> The Brew folks and the Global Tags folks told me to report the bug here.

This code does not come from Autoconf, so I'm unsure why you were asked
to report it here.

The code comes from the global package itself (in their configure.ac file):

  AC_RUN_IFELSE([AC_LANG_SOURCE([[
  main(){ (void)realpath("/./tmp", (void *)0); return 0; }
  ]])],[ac_cv_posix1_2008_realpath=yes],
  [ac_cv_posix1_2008_realpath=no])

This is the where the problem needs to be corrected.

Cheers,
  Nick



Re: Getting srcdir in script executed using m4_esyscmd in AC_INIT

2021-01-05 Thread Nick Bowler
On 2021-01-05, Bob Friesenhahn  wrote:
> Something I found which surprised me is that Automake has added a GNU
> COPYING file to my non-GNU non-GPL package.  Once the file appeared in
> the source directory, it seems that it continued to be used and
> included in spite of the Automake options provided.

Yes, this particular behaviour of Automake is very annoying.  There is
a fairly long list of filenames that have nothing to do with Automake,
and matching files get automatically included in the distribution
tarballs whenever they are present in your workspace.

If Automake is run in foreign mode it shouldn't be creating COPYING
or INSTALL though.  I assume Automake pulls the AM_INIT_AUTOMAKE
arguments from m4 traces, so perhaps while converting configure.ac
something got messed up and it was accidentally run in the default
GNU mode, where it can and will create these files with --add-missing.

Cheers,
  Nick



Re: determine base type of a typedef

2020-10-23 Thread Nick Bowler
On 2020-10-23, Nick Bowler  wrote:
> On 23/10/2020, Paul Eggert  wrote:
>> On 10/22/20 6:09 PM, Russell Shaw wrote:
>>> else if(sizeof(time_t) == sizeof(long int)) {
>>
>> This is not the right kind of test. You want to test whether time_t and
>> int
>> are
>> the same types, not whether they're the same size. To do that, you should
>> use
>> code like this:
>>
>> extern time_t foo;
>> extern long int foo;
>>
>> Of course this means you'll need to compile N programs rather than
>> one, but that's life in the big Autoconf city.
>
> To improve configure performance when N is more than one or two,
> you can use C11 _Generic and AC_COMPUTE_INT to pretty easily and
> quickly determine which type (out of a finite list of candidates)
> time_t or any other type is compatible with.
>
> But you'd need a fallback (probably by compiling one program
> like the one shown above for each type) to handle the case
> where _Generic is not supported by the implementation.
>
> Example (totally untested):
>
>   AC_COMPUTE_INT([timetype],
> [_Generic((time_t)0, long long: 3, default: 0)
>   + _Generic((time_t)0, long: 2, default: 0)
>   + _Generic((time_t)0, int: 1, default: 0)],

On review, it is obvious this list of types could be more succinctly
written with a single _Generic as they are all for sure different:

  _Generic((time_t)0, long long: 3, long: 2, int: 1, default: 0)

But care must be taken if any of the generic cases are themselves
typedefs (for example, if we wanted to determine whether POSIX
ssize_t is compatible with a list of types that includes ptrdiff_t),
as it is an error to include compatible types among the list of
cases:

  /* error if ptrdiff_t happens to be compatible with long long */
  _Generic((ssize_t)0, long long: 2, ptrdiff_t: 1, default: 0)

Nesting avoids this problem better than adding as I did originally:

  _Generic((ssize_t)0, long long: 2, default:
_Generic((ssize_t)0, ptrdiff_t: 1, default: 0))

as only one "branch" can be matched.

> [#include <time.h>],
> [... slow fallback computation goes here])
>
>   AS_CASE([$timetype],
> [3], [... action when time_t is compatible with long long],
> [2], [... action when time_t is compatible with long],
> [1], [... action when time_t is compatible with int],
> [... action when time_t's compatibility is undetermined])

Cheers,
  Nick



Re: determine base type of a typedef

2020-10-23 Thread Nick Bowler
On 23/10/2020, Paul Eggert  wrote:
> On 10/22/20 6:09 PM, Russell Shaw wrote:
>> else if(sizeof(time_t) == sizeof(long int)) {
>
> This is not the right kind of test. You want to test whether time_t and int
> are
> the same types, not whether they're the same size. To do that, you should
> use
> code like this:
>
> extern time_t foo;
> extern long int foo;
>
> Of course this means you'll need to compile N programs rather than
> one, but that's life in the big Autoconf city.

To improve configure performance when N is more than one or two,
you can use C11 _Generic and AC_COMPUTE_INT to pretty easily and
quickly determine which type (out of a finite list of candidates)
time_t or any other type is compatible with.

But you'd need a fallback (probably by compiling one program
like the one shown above for each type) to handle the case
where _Generic is not supported by the implementation.

Example (totally untested):

  AC_COMPUTE_INT([timetype],
[_Generic((time_t)0, long long: 3, default: 0)
  + _Generic((time_t)0, long: 2, default: 0)
  + _Generic((time_t)0, int: 1, default: 0)],
[#include <time.h>],
[... slow fallback computation goes here])

  AS_CASE([$timetype],
[3], [... action when time_t is compatible with long long],
[2], [... action when time_t is compatible with long],
[1], [... action when time_t is compatible with int],
[... action when time_t's compatibility is undetermined])

Cheers,
  Nick



Re: AC_PACKAGE_VERSION visibility slightly changed in autoconf-2.69c. Bug or feature?

2020-10-22 Thread Nick Bowler
On 2020-10-22, Zack Weinberg  wrote:
> I acknowledge that requiring double-quotation of AC_INIT arguments
> when they contain characters significant to M4 _should_ work; however,
> it did not work in my tests (which were not exactly the same as the
> above; see the "AC_INIT with unusual version strings" test case in
> tests/base.m4, on the branch).  Also, it increases the compat hit
> we're taking, since e.g.
>
> AC_INIT(GNU MP, GMP_VERSION, [gmp-b...@gmplib.org, see
> https://gmplib.org/manual/Reporting-Bugs.html], gmp)
>
> which also worked with 2.69, will now be considered invalid,

If this works in 2.69 I don't see why this snippet would be rendered
invalid if AC_INIT did not over/underquote, because ...

> Would you care to propose a complete patch to be applied on top of
> zack/ac-init-quoting?  In addition to "reverting hunks" you would need
> to make sure that AC_PACKAGE_* are always treated consistently within
> lib/autoconf/*.m4, fix the testsuite by adding double quotation to AC_INIT
> arguments where necessary, and document in both doc/autoconf.texi and NEWS
> the changed requirements for AC_INIT arguments.

... I am not suggesting we change any behaviour of AC_INIT arguments wrt.
quoting, as compared to Autoconf 2.69.  As far as I know this version
dutifully follows typical m4 quoting conventions, I am not aware of
any specific under/overquotation in existing releases.

This underquotation (2.69c) and overquotation (zack/ac-init-quoting
branch) is a behaviour change compared to 2.69.  I am proposing we NOT
change the amount of quoting, but rather we should stick with normal m4
conventions, which would avoid all the AC_INIT-related regressions I've
seen reported so far to this list.

Anyway, I should have some time on the weekend, I'll see what I can do
about proposing a proper patch :)

Cheers,
  Nick



Re: AC_PACKAGE_VERSION visibility slightly changed in autoconf-2.69c. Bug or feature?

2020-10-22 Thread Nick Bowler
On 22/10/2020, Zack Weinberg  wrote:
> On Thu, Oct 22, 2020 at 11:53 AM Nick Bowler  wrote:
>> On 2020-10-22, Zack Weinberg  wrote:
>> > On Wed, Oct 21, 2020 at 10:25 PM Paul Eggert 
>> > wrote:
>> >>
>> >> On 10/21/20 6:15 AM, Zack Weinberg wrote:
>> >> > We*could*  add a special case in AC_INIT where, if any of the third,
>> >> > fourth, or fifth arguments contain the literal strings
>> >> > `AC_PACKAGE_NAME` or `AC_PACKAGE_VERSION`, those are replaced with
>> >> > the
>> >> > values of the first and second argument, respectively.  This would
>> >> > keep the GHC code working as-is.  I'm not sure whether that's a good
>> >> > idea; cc:ing Paul and Eric for their thoughts.
>> >>
>> >> I'm not following all the details here
>> >
>> > The concrete problem is that, without the hack I described, we cannot
>> > support both
>> >
>> > AC_INIT([foo], [1.0], [foo-...@foo.org], [foo-AC_PACKAGE_VERSION])
>> >
>> > and
>> >
>> > AC_INIT([bar], [1.0], [foo-bug@[192.0.2.1]])
>>
>> I think this is missing the point.  The m4 way is that such an
>> email address should simply be double quoted to avoid the unwanted
>> m4 expansion, for example:
>>
>>   AC_INIT([bar], [1.0], [[foo-bug@[192.0.2.1]]])
>
> I tried that and it doesn't work.  No amount of extra quotation (ok, I
> only went up to four levels before I gave up) will prevent the square
> brackets from being lost, if I don't have autoconf use m4_defn to set
> the value of the shell variable PACKAGE_BUGREPORT.

It works perfectly fine for me with Autoconf-2.69...

  % cat >configure.ac <<'EOF'
AC_INIT([bar], [1.0], [[foo-bug@[192.0.2.1]]])

AS_ECHO(["AC_PACKAGE_BUGREPORT"])
AS_ECHO(["$PACKAGE_BUGREPORT"])

AC_OUTPUT
EOF

  % autoconf-2.69
  % ./configure
  2.69
  foo-bug@[192.0.2.1]
  foo-bug@[192.0.2.1]
  configure: creating ./config.status

And it also works as expected with the zack/ac-init-quoting branch if I
simply revert the patch hunks identified earlier in this thread:

  % autoconf-zack-patched
  % ./configure
  2.69c.10-6487-dirty
  foo-bug@[192.0.2.1]
  foo-bug@[192.0.2.1]
  configure: creating ./config.status

If the hunks are not reverted, quotation problems are readily apparent:

  % autoconf-zack-unpatched
  2.69c.10-6487
  foo-bug@[192.0.2.1]
  [foo-bug@[192.0.2.1]]
  configure: creating ./config.status

(those patch hunks are not the only instances of overquotation added by the
patch; I see that the patch also overquotes the bugreport address in the
configure --help text)

Cheers,
  Nick



Re: AC_PACKAGE_VERSION visibility slightly changed in autoconf-2.69c. Bug or feature?

2020-10-22 Thread Nick Bowler
On 2020-10-22, Zack Weinberg  wrote:
> On Wed, Oct 21, 2020 at 10:25 PM Paul Eggert  wrote:
>>
>> On 10/21/20 6:15 AM, Zack Weinberg wrote:
>> > We*could*  add a special case in AC_INIT where, if any of the third,
>> > fourth, or fifth arguments contain the literal strings
>> > `AC_PACKAGE_NAME` or `AC_PACKAGE_VERSION`, those are replaced with the
>> > values of the first and second argument, respectively.  This would
>> > keep the GHC code working as-is.  I'm not sure whether that's a good
>> > idea; cc:ing Paul and Eric for their thoughts.
>>
>> I'm not following all the details here
>
> The concrete problem is that, without the hack I described, we cannot
> support both
>
> AC_INIT([foo], [1.0], [foo-...@foo.org], [foo-AC_PACKAGE_VERSION])
>
> and
>
> AC_INIT([bar], [1.0], [foo-bug@[192.0.2.1]])

I think this is missing the point.  The m4 way is that such an
email address should simply be double quoted to avoid the unwanted
m4 expansion, for example:

  AC_INIT([bar], [1.0], [[foo-bug@[192.0.2.1]]])

This works already, as expected, in existing versions of Autoconf.

But if your package actually used such an email address today, it will
be broken by the patch due to the overquotation in AC_INIT.  To avoid
regressions like the one reported, and to be consistent with how most
macros are expected to function, we should simply not overquote in
the definition of AC_INIT.

Cheers,
  Nick



Re: AC_PACKAGE_VERSION visibility slightly changed in autoconf-2.69c. Bug or feature?

2020-10-21 Thread Nick Bowler
On 2020-10-21, Zack Weinberg  wrote:
> On Tue, Oct 20, 2020 at 4:57 PM Nick Bowler  wrote:
>> Note: the change you report is introduced by Zack's fix for related
>> AC_INIT quoting regressions.  This patch is not included in 2.69c (or
>> even on git master), but does seem to be applied by the Gentoo package.
>
> Yeah, this is a "can't have it both ways" kind of thing.  We can
> reliably round-trip "unusual" characters (like the ones that appear
> all the time in URLs and email addresses) through AC_INIT's arguments,
> or we can expand macros in those arguments even when they're quoted on
> input; I don't think there's any way to do both.

M4 macros (including AC_INIT) should normally follow the m4 quoting rule
of thumb, which is that the amount of quotation should exactly equal the
depth of macro expansion.  Remember that extra quotation added by
macros such as m4_defn and m4_dquote counts towards this.

Previously AC_INIT had too little quotation, i.e., fewer levels of
quotation than expansion, which led to unexpected behaviour.

Now, with the patch, AC_INIT is adding more levels of quotation than
expansion, leading to different unexpected behaviour.

M4 macros are happiest when the level of quotation is just right :)

> This only works by accident in 2.69, incidentally.  AC_PACKAGE_VERSION
> is defined *after* AC_PACKAGE_TARNAME (see _AC_INIT_PACKAGE, lines
> 235-261 of $prefix/share/autoconf/general.m4) so both old and new
> autoconf set AC_PACKAGE_TARNAME to the literal string
> "ghc-AC_PACKAGE_VERSION".

While I agree it's probably a bit "naughty" to use AC_PACKAGE_VERSION
in the argument to AC_INIT, it is a red herring.  Use of any macro would
have the exact same problem.

I'd expect double-quoted arguments to AC_INIT to be similarly broken
with this patch while previously they would work as expected.

> The value undergoes an extra round of expansion when it's used to set
> the shell variable PACKAGE_TARNAME (lines 416-428 of the same file).
> This extra round of expansion is undesirable in general.

I don't think I agree, when macro expansion is undesired the normal way
is to double-quote the arguments, which properly suppresses expansion
when macro definitions follow quoting the rule of thumb.

Cheers,
  Nick



Re: AC_PACKAGE_VERSION visibility slightly changed in autoconf-2.69c. Bug or feature?

2020-10-20 Thread Nick Bowler
Hi,

On 2020-10-20, Sergei Trofimovich  wrote:
> Initial bug is reported as autoconf failure on ghc-8.8.4:
> https://bugs.gentoo.org/750191
> There autconf 2.69 works, 2.69c does not.

Note: the change you report is introduced by Zack's fix for related
AC_INIT quoting regressions.  This patch is not included in 2.69c (or
even on git master), but does seem to be applied by the Gentoo package.

The 2.69c release version seems to handle the example fine.

> Here is the minimal example:
>
> OK:
>
>   $ cat configure.ac
>   AC_INIT([The Glorious Glasgow Haskell Compilation System], [9.1.0],
> [glasgow-haskell-b...@haskell.org], [ghc-AC_PACKAGE_VERSION])
>
>   echo "$PACKAGE_VERSION"
>
>   AC_OUTPUT
>   $ autoconf-2.69
>   $ ./configure
>   9.1.0
>   configure: creating ./config.status
>
> BAD:
>
>   $ autoconf-2.70_beta2
>   configure.ac:1: error: possibly undefined macro: AC_PACKAGE_VERSION
>   If this token and others are legitimate, please use m4_pattern_allow.
>   See the Autoconf documentation.

Yes I think now Zack's underquotation fixes have added the opposite
problem.  There is now too much quotation so the tarname (and other
arguments) are not fully expanded when used.

These changes should probably be simply dropped from the patch, or at
least they need more consideration...

@@ -436,18 +427,12 @@ AC_SUBST([SHELL])dnl
 AC_SUBST([PATH_SEPARATOR])dnl

 # Identity of this package.
-AC_SUBST([PACKAGE_NAME],
-[m4_ifdef([AC_PACKAGE_NAME],  ['AC_PACKAGE_NAME'])])dnl
-AC_SUBST([PACKAGE_TARNAME],
-[m4_ifdef([AC_PACKAGE_TARNAME],   ['AC_PACKAGE_TARNAME'])])dnl
-AC_SUBST([PACKAGE_VERSION],
-[m4_ifdef([AC_PACKAGE_VERSION],   ['AC_PACKAGE_VERSION'])])dnl
-AC_SUBST([PACKAGE_STRING],
-[m4_ifdef([AC_PACKAGE_STRING],['AC_PACKAGE_STRING'])])dnl
-AC_SUBST([PACKAGE_BUGREPORT],
-[m4_ifdef([AC_PACKAGE_BUGREPORT], ['AC_PACKAGE_BUGREPORT'])])dnl
-AC_SUBST([PACKAGE_URL],
-[m4_ifdef([AC_PACKAGE_URL],   ['AC_PACKAGE_URL'])])dnl
+AC_SUBST([PACKAGE_NAME],  ['m4_defn([AC_PACKAGE_NAME])'])dnl
+AC_SUBST([PACKAGE_TARNAME],   ['m4_defn([AC_PACKAGE_TARNAME])'])dnl
+AC_SUBST([PACKAGE_VERSION],   ['m4_defn([AC_PACKAGE_VERSION])'])dnl
+AC_SUBST([PACKAGE_STRING],['m4_defn([AC_PACKAGE_STRING])'])dnl
+AC_SUBST([PACKAGE_BUGREPORT], ['m4_defn([AC_PACKAGE_BUGREPORT])'])dnl
+AC_SUBST([PACKAGE_URL],   ['m4_defn([AC_PACKAGE_URL])'])dnl

 m4_divert_pop([DEFAULTS])dnl
 m4_wrap_lifo([m4_divert_text([DEFAULTS],
@@ -1099,9 +1084,8 @@ Fine tuning of the installation directories:
  --infodir=DIR           info documentation [DATAROOTDIR/info]
  --localedir=DIR         locale-dependent data [DATAROOTDIR/locale]
  --mandir=DIR            man documentation [DATAROOTDIR/man]
-]AS_HELP_STRING([--docdir=DIR],
-  [documentation root ]@<:@DATAROOTDIR/doc/m4_ifset([AC_PACKAGE_TARNAME],
-[AC_PACKAGE_TARNAME], [PACKAGE])@:>@)[
+  --docdir=DIR            documentation root @<:@DATAROOTDIR/doc/]dnl
+m4_default_quoted(m4_defn([AC_PACKAGE_TARNAME]), [PACKAGE])[@:>@
  --htmldir=DIR           html documentation [DOCDIR]
  --dvidir=DIR            dvi documentation [DOCDIR]
  --pdfdir=DIR            pdf documentation [DOCDIR]

If you drop those two hunks from Zack's patch the example should work again.

Cheers,
  Nick



Re: Weird failure with autoconf 2.69c in gmp

2020-10-14 Thread Nick Bowler
On 2020-10-14, Ross Burton  wrote:
> No it's the # in the URL. Simply removing #libidn2 fixes this problem.
>
> Presumably some quoting problem which just needs more precision []?
[...]
> On Wed, 14 Oct 2020 at 09:26, Ross Burton  wrote:
>>
>> Similar in libidn2:
>>
>> | m4:configure.ac:16: Warning: excess arguments to builtin `m4_define'
>> ignored
>>
>>  16 AC_INIT([libidn2], [2.3.0], [help-lib...@gnu.org],,
>>  17   [https://www.gnu.org/software/libidn/#libidn2])

Yes, very similar quoting bug introduced by the same commit, but a bit
more subtle this time:

  m4_ifndef([AC_PACKAGE_URL],
    [m4_define([AC_PACKAGE_URL],
       m4_default(m4_defn([_ac_init_URL]),
         [m4_if(m4_index(m4_defn([_ac_init_NAME]),
                         [GNU ]), [0],
            [[https://www.gnu.org/software/]m4_defn([AC_PACKAGE_TARNAME])[/]])]))])

If _ac_init_URL is nonempty it is insufficiently quoted in the expansion
of m4_default; since the expansion introduces a comment (because of the #)
that comment will eat some of the parentheses in this snippet and the
resulting parse is very much not right.

I probably would write this snippet something like:

  m4_define([AC_PACKAGE_URL],
    m4_ifnblank(m4_defn([_ac_init_URL]),
      [m4_defn([_ac_init_URL])], ...))

Cheers,
  Nick



Re: Weird failure with autoconf 2.69c in gmp

2020-10-13 Thread Nick Bowler
On 13/10/2020, Ross Burton  wrote:
> Hi,
>
> Using autoconf 2.69c (upgrading from 2.69b meant we could drop two
> patches, so that's good news!) to build gmp fails in a rather
> mysterious way:
[...]
> | m4:configure.ac:40: Warning: excess arguments to builtin `m4_define'
> ignored
> | autom4te: error: m4 failed with exit status: 1
> | aclocal: error: echo failed with exit status: 1
> | autoreconf: error: aclocal failed with exit status: 1
>
> Line 40 is:
>
> AC_INIT(GNU MP, GMP_VERSION, [gmp-b...@gmplib.org, see
> https://gmplib.org/manual/Reporting-Bugs.html], gmp)
>
> Has anyone seen this, or similar, before?

This appears to be caused by a quoting bug in _AC_INIT_PACKAGE (expanded
by AC_INIT) which was recently introduced by commit 6a0c0239449a ("Trim
whitespace from arguments of AC_INIT").

It would seem that the comma from the 3rd argument ends up getting
passed unquoted to m4_define which results in this error.

This code in lib/autoconf/general.m4:

  m4_ifndef([AC_PACKAGE_BUGREPORT],
  [m4_define([AC_PACKAGE_BUGREPORT], _ac_init_BUGREPORT)])

should probably be changed to:

  m4_ifndef([AC_PACKAGE_BUGREPORT],
  [m4_define([AC_PACKAGE_BUGREPORT], m4_defn([_ac_init_BUGREPORT]))])

Cheers,
  Nick



Re: [sr #110318] autoreconf: support libtoolize being named glibtoolize

2020-09-25 Thread Nick Bowler
On 2020-04-29, Zack Weinberg  wrote:
> 2.61 was a long time ago.  I'm wondering if Gentoo still ships
> 'glibtoolize' and not 'libtoolize'.  If it doesn't, this change is
> only weakly motivated.

Today at least, Gentoo certainly does not install a "glibtoolize" nor
does it install a patched autoreconf that looks for this name.

I don't recall such a thing ever being the case but of course I could be
misremembering.  But a quick search of the historical repository turns
up no obvious evidence of such patches.

Cheers,
  Nick



Re: [PATCH v2] Ensure standard file descriptors are open on start

2020-08-28 Thread Nick Bowler
On 2020-08-28, Paul Eggert  wrote:
> On 8/28/20 6:52 AM, Zack Weinberg wrote:
>> I think that for 2.70 we should make fd 0 read-only and 1,2
>> write-only here, and revisit this afterward -- when we're not in a
>> release freeze we can think about things like turning on set -e mode.
>
> Sounds good.
>
> In the longer term I doubt whether set -e is the way to go. I recall some
> old shells mishandling it (e.g., with 'set -e' the command 'A || B' would
> cause the shell to exit when A failed). And I suspect the use of 'set -e' to
> detect shell errors is problematic even today. 'set -e' is intended more as
> a debugging aid than as a programming facility.

Even if the shell implements the POSIX requirements with no bugs, the
semantics of 'set -e' get very weird when complex commands are involved,
and that weirdness propagates down into shell functions in very
unintuitive ways.

Whether or not 'set -e' has any effect at all in a shell function body
depends on the specific shell syntax used at the point of the function
call!
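
A small illustration of what I mean (behaviour per POSIX; individual
shells have historically varied):

  set -e
  f () { false; echo inside f; }

  if f; then :; fi   # -e is ignored throughout f, "inside f" is printed
  f                  # -e applies inside f, the shell exits at "false"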

It certainly should not be enabled at the top level of configure scripts
by Autoconf (or any nontrivial shell script, really) as that will just
lead everyone to madness.

Cheers,
  Nick



Re: Exactly when does autoconf enter cross-compilation mode?

2020-08-23 Thread Nick Bowler
On 2020-08-23, wf...@niif.hu  wrote:
> Nick Bowler  writes:
>> On 2020-08-22, wf...@niif.hu  wrote:
>>
>>> https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Hosts-and-Cross_002dCompilation.html
[...]
> The documentation linked above is confusing because it describes the
> 2.13 behavior, the 2.50 behavior and the "backward compatibility
> scheme" together.  For example it starts with "the chain of default is
> now simply: target defaults to host, host to build, and build to the
> result of config.guess", then later "when --host is specified but
> --build isn't, the build system is assumed to be the same as --host",
> which is a contradiction, although "eventually, this historically
> incorrect behavior will go away." So has this eventuality happened
> already?

I think either the statement about the build system type being set
to the --host value is just wrong or that the eventuality must have
happened almost immediately (certainly build_alias does not appear
to get set to anything at all if --build is unspecified, contrary to
what the manual suggests).

The behaviour today is essentially unchanged since the Autoconf 2.50
release almost 2 decades ago.

It's quite possible you are the first person to read this section of
the manual in all that time and actually say "hey wait a minute..."

>> Specifying --host alone *may* select cross compilation based on
>> heuristic (whether the compiler's output can be executed).  As the
>> manual explains this is fragile and is provided for compatibility
>> with historical behaviour.
>
> It also mentions that "by the time the compiler test is performed, it
> may be too late to modify the build-system type".  What would modify the
> build-system type anyway?  Isn't this only about whether to enter
> cross-compilation mode?

I think this is what is meant, since the heuristic is run as part of the
compiler detection and any tests run before that in the configure script
cannot know whether or not the user is cross compiling.  When --build is
specified then the cross compilation state does not depend on heuristics
and is valid for the entire configure script execution.

This may or may not be a real problem in practice.  It's hard to imagine
a scenario where you would care about the cross compilation state before
running the compiler tests.  Everything works just fine most of the time
if you only specify --host.

False positives are a real problem with the heuristic though.  E.g., if
a system can run the compiled binaries but we are actually cross-building
for a different system with different characteristics the heuristic may
determine the user is not cross compiling and then runtime tests can give
wrong results later.

> By the way: does "cross-compilation mode" equal that configure
> "doesn't run any tests that require execution", or is this only a
> single implication of it?

"Cannot run compiled programs" is the primary implication of cross
compilation so yes, this primarily affects the behaviour of AC_RUN_IFELSE
and configure tests which depend on it.

(Any other code you are using can check the cross compilation mode:
it is available to configure scripts in the cross_compiling variable).
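
For instance, a trivial sketch in configure.ac:

  AS_IF([test "x$cross_compiling" = xyes],
    [AC_MSG_NOTICE([cross compiling, skipping run-time checks])])

works regardless of how the mode was selected.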

>> By specifying both --host and --build, cross compilation mode is
>> enabled whenever they are different.  This is the preferred method.
>
> I see.  But what do I do in a true multiarch setup, for example when I
> compile for i386 on an x86_64 build system, which can transparently run
> the i386 binaries, and thus I don't want to throw away the tests
> requiring execution?  Such setups are becoming more and more common.

If you are building i386 binaries fully intending for them to be
executed on the same machine which is building them then this is not
cross compiling.

You could either set --host and --build to the same i386-whatever value,
which should work fine, or just specify --host alone and configure
should correctly determine that you are not cross compiling.
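
For example (a sketch; the CC setting is just one common way to get
32-bit output from a multilib gcc, adjust as needed):

  ./configure --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu \
    CC='gcc -m32'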

Cheers,
  Nick



Re: Exactly when does autoconf enter cross-compilation mode?

2020-08-22 Thread Nick Bowler
On 2020-08-22, wf...@niif.hu  wrote:
> https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Hosts-and-Cross_002dCompilation.html
> is rather hard to follow in general, but also contains the following
> clear-cut statement: "Now, configure enters cross-compilation mode if
> and only if --host is passed."
>
> However, if I pass in my build architecture Autoconf 2.69 reports:
>
> $ ./configure --host=x86_64-linux-gnu

I imagine that the authors meant to write "only if", rather than
"if and only if" because indeed the statement as written is not true
(the manual immediately goes on to explain the actual behaviour).

Specifying --host alone *may* select cross compilation based on
heuristic (whether the compiler's output can be executed).  As the
manual explains this is fragile and is provided for compatibility
with historical behaviour.

By specifying both --host and --build, cross compilation mode is
enabled whenever they are different.  This is the preferred method.

Cheers,
  Nick



Re: [sr #110286] Make it possible to request a specific (non-latest) version of a language standard

2020-07-27 Thread Nick Bowler
On 2020-07-27, Zack Weinberg  wrote:
> URL:
>   
>
>  Summary: Make it possible to request a specific
> (non-latest)
> version of a language standard
>  Project: Autoconf
> Submitted by: zackw
> Submitted on: Mon 27 Jul 2020 06:31:37 PM UTC
> Category: None
> Priority: 5 - Normal
> Severity: 1 - Wish
>   Status: None
>  Privacy: Public
>  Assigned to: None
> Originator Email:
>  Open/Closed: Open
>  Discussion Lock: Any
> Operating System: None
>
> ___
>
> Details:
>
> Feedback on the 2.69b beta indicates that users find the new behavior of
> AC_PROG_CC and AC_PROG_CXX, automatically selecting the latest supported
> language standard, problematic.  Quoting
> https://lists.gnu.org/archive/html/autoconf/2020-07/msg00010.html :
>
>> One issue we [PostgreSQL] would like to point out is that
>> the new scheme of automatically checking for the latest
>> version of the C and C++ standards (deprecating AC_PROG_CC_C99
>> etc.) is problematic...
>>
>> [W]e have set C99 as the project standard. So checking for
>> C11 is (a) useless, and (b) bad because we don't want
>> developers to accidentally make use of C11 features and have
>> the compiler accept them silently.

I have no comments on the C++ side of things but on the C side this
request doesn't make too much sense.

As a portability tool, the goal of Autoconf and configure scripts is
to find a compiler that can successfully compile the application.

Aside from the removal of gets from the standard library (which most
C11 compilers still implement anyway) and that some language features
which were mandatory in C99 are now optional (which again, most C11
compilers implement anyway), C11 is essentially completely backwards
compatible with C99 so such a compiler should be perfectly fine for
building a C99 codebase.

AC_PROG_CC_C99 has never guaranteed that a C99 compiler will be selected,
it is "best effort" only and has always accepted C11 compilers provided
they support VLAs.  GCC has defaulted to C11 mode for the better part
of a decade now (since version 5) and AC_PROG_CC_C99 will not add any
options.

All compilers I'm aware of that support both C99 and C11 modes will
silently accept most if not all C11 features even when C99 mode is
selected by the user.  So having Autoconf select a C99 mode is probably
not going to do anything to help projects avoid modern language
features.  For the same reason it could be difficult to write a feature
test program which usefully determines that the compiler is in C99 mode
(probably the best you can do is test for specific values of the
__STDC_VERSION__ macro which is really not the Autoconf way).

If you don't want specific language constructs to be used in your
codebase the proper way to do this is with linting tools or warning
flags during development, not with portability tools like Autoconf.

For example, with GCC, if you want to reject new C11 syntax you can
build with the -Werror=c99-c11-compat option.  (You could even use
Autoconf to write a configure test that will automatically add this
option to CFLAGS if it is supported by the compiler.)
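
A minimal sketch of such a test (untested; note that some compilers only
warn about unrecognised options, so a real check may need to be stricter):

  AC_MSG_CHECKING([whether $CC accepts -Werror=c99-c11-compat])
  save_CFLAGS=$CFLAGS
  CFLAGS="$CFLAGS -Werror=c99-c11-compat"
  AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [])],
    [AC_MSG_RESULT([yes])],
    [AC_MSG_RESULT([no]); CFLAGS=$save_CFLAGS])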

Cheers,
  Nick



Re: [sr #110271] libSDL2 fails with autoconf 2.70

2020-07-16 Thread Nick Bowler
On 2020-07-16, Ross Burton  wrote:
[...]
> libSDL2 fails to configure with autoconf 2.70 but works with 2.69:
>
> | checking for size_t... yes
> | checking for M_PI in math.h... ../SDL2-2.0.12/configure: line 13202: CPP:
> command not found
> | checking how to run the C preprocessor... gcc  -E
> | ../SDL2-2.0.12/configure: line 13328: ac_fn_c_try_cpp: command not found
> | ../SDL2-2.0.12/configure: line 13328: ac_fn_c_try_cpp: command not found
> | configure: error: in
> `/home/pokybuild/yocto-worker/qemux86-64/build/build/tmp/work/x86_64-linux/libsdl2-native/2.0.12-r0/build':
> | configure: error: C preprocessor "gcc  -E" fails sanity check
> | See `config.log' for more details
> | WARNING: exit code 1 from a shell command.
>
> The configure.ac is just a very badly named macro:
>
> AC_CHECK_DEFINE(M_PI, math.h)
>
> Which is implemented here:
>
> https://github.com/spurious/SDL-mirror/blob/master/acinclude/ac_check_define.m4
>
> This uses AC_EGREP_CPP but it doesn't appear to explicitly look for a cpp.
> My workaround is to explicltly call AC_PROG_CPP earlier: should AC_EGREP_CPP
> be doing this, or should this requirement be documented?

This is a classic M4 quoting bug.  The macro should quote the third
argument to AC_CACHE_CHECK (and really, should quote all the arguments).

The problem is that diversions do not work properly during argument
collection so AC_REQUIRE (which depends on diversions) and many other
macros will not work properly if it is expanded during argument
collection.

Quote the arguments to AC_CACHE_CHECK and the prerequisite macros will
get expanded in the correct order.
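
Schematically (this is not the exact macro text, and the cache variable
name is only illustrative), the difference is between

  AC_CACHE_CHECK(for $1 in $2, ac_cv_define_$1,
    AC_EGREP_CPP(...))

and

  AC_CACHE_CHECK([for $1 in $2], [ac_cv_define_$1],
    [AC_EGREP_CPP(...)])

In the first form AC_EGREP_CPP is expanded while the arguments are still
being collected; in the second it is expanded inside AC_CACHE_CHECK's
body, so its AC_REQUIREd prerequisites (the preprocessor and grep checks)
are hoisted out ahead of the cache check as intended.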

(also WTF is with that stray AC_DEFINE outside of the macro definition?)

Cheers,
  Nick



Re: Using @bindir@ etc. in C headers

2020-06-05 Thread Nick Bowler
On 05/06/2020, Florian Weimer  wrote:
> * Nick Bowler:
>
>>> It would like to get config.status expansion going, among other things.
>>> It's nice to consolidate these things in a single place, and avoid
>>> scattering such constructs and several places.
>>>
>>> What do you think about this?
>>
>> Autoconf is to be used as part of build systems that comply with the GNU
>> coding standards and these standards say that the user can set prefix
>> etc.
>> on either the make command line or the configure command line.
>>
>> In other words, the following should produce a working installation
>> (assuming a clean build):
>>
>>   ./configure
>>   make prefix=/some/where install
>>
>> Substituting installation directories into C source files at configure
>> time is probably not going to work in this case.  This is why the manual
>> recommends using make rules to do it.
>
> For the install target, the prefix= setting should not trigger
> recompilation and thus does not affect what gets baked into binaries.
> Instead, it should only affect installed paths.  The config.status
> approach makes it more likely that this happens.

Hence why I stated the assumption of a clean build where the "install"
target will also build the package, since I didn't want to worry about
this detail in a simple example.

> I can't find a reference that you should be able to specify prefix= on
> the make command line for a non-install target.

GCS 7.2.5 "Variables for Installation Directories"[1]

  Installers are expected to override these values when calling
  make (e.g., make prefix=/usr install) or configure (e.g., configure
  --prefix=/usr).

[1] https://www.gnu.org/prep/standards/standards.html#Directory-Variables

Cheers,
  Nick



Re: Using @bindir@ etc. in C headers

2020-06-05 Thread Nick Bowler
On 05/06/2020, Florian Weimer  wrote:
> * Michael Orlitzky:
>
>> On 6/5/20 6:57 AM, Florian Weimer wrote:
>>> I would like to define macros containing the standard paths, like this:
>>>
>>> #define BINDIR "@bindir@"
>>>
>>> It does not work due to this code in lib/autoconf/general.m4 (which
>>> appears to be predate DESTDIR support):
>>>
>>> ...
>>>
>>> Is there are generally approved way to work around this?  The manual
>>> tells us to use -D preprocessor arguments, but I'd prefer the
>>> explicitness of defining the macros via a header file.
>>
>> The autoconf manual's "Installation Directory Variables" section says
>> the following...
>>
>>   Similarly, you should not rely on AC_CONFIG_FILES to replace datadir
>>   and friends in your shell scripts and other files; instead, let make
>>   manage their replacement. For instance Autoconf ships templates of its
>>   shell scripts ending with `.in', and uses a makefile snippet similar
>>   to the following to build scripts like autoheader and autom4te:
>>
>>  edit = sed \
>>  -e 's|@datadir[@]|$(pkgdatadir)|g' \
>>  -e 's|@prefix[@]|$(prefix)|g'
>>
>>  autoheader autom4te: Makefile
>>  rm -f $@ $@.tmp
>>  $(edit) '$(srcdir)/$@.in' >$@.tmp
>>  chmod +x $@.tmp
>>  chmod a-w $@.tmp
>>  mv $@.tmp $@
>>
>>  autoheader: $(srcdir)/autoheader.in
>>  autom4te: $(srcdir)/autom4te.in
>>
>> Not very aesthetically pleasing, but it gets the job done.
>
> It would like to get config.status expansion going, among other things.
> It's nice to consolidate these things in a single place, and avoid
> scattering such constructs and several places.
>
> What do you think about this?

Autoconf is to be used as part of build systems that comply with the GNU
coding standards and these standards say that the user can set prefix etc.
on either the make command line or the configure command line.

In other words, the following should produce a working installation
(assuming a clean build):

  ./configure
  make prefix=/some/where install

Substituting installation directories into C source files at configure
time is probably not going to work in this case.  This is why the manual
recommends using make rules to do it.

Cheers,
  Nick



Re: [PATCH] up_to_date_p: treat equal mtime as outdated.

2020-04-14 Thread Nick Bowler
On 2020-04-13, Paul Eggert  wrote:
> I just checked, and GNU Make uses high-resolution file timestamps when
> available, and considers a file to be up-to-date if it has exactly the same
> timestamp as its dependency. I suspect that this is because Makefile rules
> like
> this:
>
> a: b
>   cp -p b a
>
> would otherwise cause needless work if one ran 'make; make'.

If the Autoconf manual's portability notes are to be believed, then
such a rule is really not a good idea because on filesystems supporting
subsecond timestamp precision some "cp -p" implementations can create
the destination file with a strictly older timestamp than the source.

It's worth pointing out that even on filesystems supporting high-
precision timestamps, actual mtime updates usually have significantly
lower resolution (typically on the order of a millisecond or so).
Sufficiently fast make rules can frequently create many files with
exactly the same mtime.  Therefore considering equal timestamps to be
"up-to-date" is really the only reasonable option.

In cases where it is required to create two files with distinct mtimes,
the only real portable option I've found is to touch one of the files
in a loop and compare timestamps until they are different.
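
For the record, a rough sketch (untested) of that kind of loop, using
find's -newer primary for the comparison:

  touch a b
  while test -z "`find b -newer a`"; do
    touch b
  done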

Cheers,
  Nick



Re: [sr #110215] AC_EGREP_HEADER appears to be broken in master

2020-03-24 Thread Nick Bowler
On 2020-03-24, Ross Burton  wrote:
> As to why this is not broken with 2.69, I think I have a theory.
>
> If I build e.g. acl with both 2.69 and master, it's notable that 2.69
> has these lines in the output that do not exist in master:
>
>> checking how to run the C preprocessor... gcc  -E
>> checking for grep that handles long lines and -e...
>> /scratch/poky/hosttools/grep
>> checking for egrep... /scratch/poky/hosttools/grep -E
>> checking for ANSI C header files... yes

Probably that is it: the long-obsolete AC_HEADER_STDC, previously
used internally by AC_INCLUDES_DEFAULT, used AC_EGREP_HEADER.  The
AC_HEADER_STDC macro is now a no-op (and is not used at all within
Autoconf anymore), so that change is likely what made the first use
of AC_EGREP_HEADER the one inside the if condition, causing the
observed results.

> Something else was causing the egrep search to happen early in the
> build.  My hunch is that the same codepath exists in apt, and now
> isn't expanding egrep earlier in the configure run.
>
> Thanks for the explanation, I'll switch out if for AS_IF and move on.

Sounds great.  And thanks for all your effort in testing and fixing all
these packages.

Cheers,
  Nick


