Re: `cp -l` doesn't work correctly on some macOS versions

2024-02-02 Thread Russ Allbery
Werner LEMBERG  writes:

> I'm shocked to read in

>   
> https://apple.stackexchange.com/questions/464706/why-do-i-get-bad-file-descriptor-when-copying-using-hardlink-flag

> that `cp -l` fails for recent macOS versions – a LilyPond user just
> confirmed that, too...

> What do you recommend to test for and/or to circumvent the issue?

I thought ln <source> <target> || ln -s <source> <target> was the standard
recipe here (in other words, fall back on symbolic links).  One has to be
very careful with how the paths are specified, though, because <source>
has a different meaning for ln -s than for ln.
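
For concreteness, a minimal sketch of that recipe (the file names are
placeholders, and both names are in the current directory so the two
commands mean the same thing):

    ln data.src data.out || ln -s data.src data.out

If the link is created in some other directory, the first argument to
ln -s has to be written relative to that directory, since it is stored
literally as the link target, while plain ln resolves it relative to the
current working directory of the command.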

(Note that even on modern systems, hard links are only reliably usable
within the same directory.  The example in the Stack Exchange question is
hard-linking to a different directory, which I would expect to fail in
lots of situations, such as on AFS file systems.)

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: [PATCH 2/2] Ignore failure of setting mode on a temporary file on OS/2

2023-10-14 Thread Russ Allbery
"Zack Weinberg"  writes:

> Really what we need here is for File::Temp to allow us to supply the
> third argument to the open() system call.  That feature was added in
> version 0.2310 of the File::Temp module.  Does anyone have time right
> now to do the archaeology on what version of File::Temp shipped with the
> oldest version of Perl we currently support?

The standard Perl command corelist -a File::Temp tells you the versions of
that module that shipped with each version of Perl.  The first version of
Perl that included a version of File::Temp later than 0.2310 was Perl
5.33.3 (which included 0.2311).

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: AC_SYS_LARGEFILE

2023-09-11 Thread Russ Allbery
Nick Bowler  writes:

> Looking at the code, CC is modified only if the -n32 option is needed to
> enable large-file support.  The comments suggest this is required on
> IRIX.  If large-file support can be enabled by preprocessor macros
> (which I imagine is the case on all current systems), AC_DEFINE is used.

> It has been this way since the macro was originally added to Autoconf.
> I can only speculate as to why the original author used CC, but the
> reason is probably so that you can just take an existing package and
> just add AC_SYS_LARGEFILE with no other modifications and it will almost
> certainly work without any major problems.

Back in the day when such flags were common (thankfully largely behind us
at this point), it was standard practice to put architecture selection
flags like -n32 into CC, not CFLAGS or CPPFLAGS.  That's because such
flags are really of a different type than CFLAGS or CPPFLAGS, more akin to
invoking a different compiler for a different target architecture than the
normal uses of CFLAGS and CPPFLAGS.
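
(For illustration only, not something current systems need: the idea was
to run configure as

    ./configure CC='cc -n32'

so that every compile and link step, including the ones configure itself
runs, targets the same ABI.)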

I suspect that the answer to the original question is "don't worry about
it, just use AC_SYS_LARGEFILE, because no system you will build on will
need the CC modification anyway."

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: INSTALL nits

2023-08-18 Thread Russ Allbery
"Zack Weinberg"  writes:

> What are the most common things people need to do in a "bootstrap"
> script that *aren't* covered by `autoreconf -i`?

I'm not sure if this is the correct thing for me to do, but for years I've
used the bootstrap script to do any pre-generation of files that I ship
with the distribution tarball to reduce the required build dependencies.
Running autoreconf -i is an important subset of that, but it also
includes, for example, pre-generating man pages from POD source and
generating input data for the test suite that relies on tools the typical
user may not have locally.
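
As a rough sketch of what I mean (the paths and tools here are
hypothetical; substitute whatever your package actually ships
pre-generated):

    #!/bin/sh
    set -e

    # Regenerate the Autotools machinery.
    autoreconf -i --force

    # Pre-generate files included in the distribution tarball so that
    # ordinary builds don't require these tools.
    for pod in docs/*.pod ; do
        pod2man "$pod" > "${pod%.pod}.1"
    done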

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: Conditional AC_CHECK_HEADER

2023-02-04 Thread Russ Allbery
Florian Weimer  writes:

> I want to submit this upstream.  Is there some explanation somewhere
> why it's not permitted to invoke AC_CHECK_HEADER under a shell conditional
> statement?  I couldn't find it in the manual.

It's documented under AS_IF:

https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.71/html_node/Common-Shell-Constructs.html#index-AS_005fIF-1

The principle that one should always use an AS_* construct if one exists
rather than regular shell constructs could probably be documented more
aggressively.
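
In other words, instead of wrapping the check in a shell if, write
something like (a sketch; the header and option names are only
illustrative):

    AS_IF([test x"$enable_foo" = xyes],
        [AC_CHECK_HEADER([foo.h], [],
            [AC_MSG_ERROR([foo.h is required with --enable-foo])])])

which lets Autoconf hoist any AC_REQUIREd setup for the check outside the
conditional.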

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: On time64 and Large File Support

2022-11-12 Thread Russ Allbery
Wookey  writes:

> Now, I'm not yet sure if just having autoconf 2.72 will actually break
> things. AIUI, these changes only apply where LFS
> (-D_FILE_OFFSET_BITS=64) is turned on, so in Debian at least, where that
> is not the default on 32bit arches, maybe this is OK. But probably quite
> a lot of packages already enable LFS so they are suddenly going to get a
> new ABI if they expose time_t anywhere?
> https://codesearch.debian.net/search?q=AC_SYS_LARGEFILE=1 shows
> 163 pages of hits, and a quick peruse suggests that AC_SYS_LARGEFILE is
> used by a lot of packages (as you might expect - this transition has
> been going on for many years). And just having that macro in
> configure.(in|ac) will turn 64-bit time_t on if you autoreconf with
> 2.72. Right?

If indeed pre-existing use of AC_SYS_LARGEFILE would suddenly enable
64-bit time_t on autoreconf, I can name two packages just off the top of
my head that this change to Autoconf will immediately break if their
Debian packages are rebuilt with a newer version of Autoconf, creating
severe bugs.

libremctl will have its ABI changed without any coordination or versioning
(which I will be doing, moving forward, but have not started tackling yet
in part because I was waiting to see what the plan would be and whether
there will be some coordinated change to SONAMEs, a new architecture, or
what).  And INN, which admittedly is a disaster about things like this for
lots of historical reasons, will have its *on-disk file format* changed
without notice in a way that will cause serious failure and possibly data
corruption on upgrades.

This is just wildly backward-incompatible and seems like an awful idea.
If we're going to throw a big switch and rebuild everything, it needs to
be done at a distro-wide level.  I believe the only safe thing for
Autoconf to do is to provide an opt-in facility, similar to what was done
for AC_SYS_LARGEFILE, and then leave deciding whether to opt in to
higher-level machinery.

> However my limited understanding as of right now says that autoconf 2.72
> tying 64bit time_t to use of AC_SYS_LARGEFILE means that 2.72 can't be
> used in debian yet. So I currently favour not tying them together in
> this release.

That's also my understanding from the thread so far, although I'm not sure
that I'm following all of the subtleties.

> People have been using AC_SYS_LARGEFILE without 64bit time_t for many
> years now so it's not yet clear to me why that cannot continue.

And these are conceptually not at all the same thing.  I saw Paul's
explanation for why he views them as fundamentally the same because of
their effect on system calls like stat, but I certainly don't think of
them that way and I am quite dubious many other people will either.  The
set of things that I have to check to ensure that time_t is handled
correctly is totally different than the set of things I thought about when
enabling AC_SYS_LARGEFILE many years in the past.

I recognize that there will be overlap once file timestamps are past 2038
and that will happen sooner than anyone plans for, but it's still true
that this has *not* happened right now and this therefore is not currently
creating many bugs, whereas this switch in this way will create many, very
serious bugs immediately.

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: config.sub/config.guess using nonportable $(...) substitutions

2021-03-09 Thread Russ Allbery
Warren Young  writes:

> Since all versions of Solaris postdate this, Sun really should have made
> /bin/sh a POSIX shell from the start, but for whatever reason, did not,
> and now that decision's causing us problems.

My recollection is that there was concern at the time with portability of
shell scripts written for SunOS.  Sun chose to keep /bin/sh (and various
other tools) compatible with SunOS rather than compatible with POSIX and
introduced the /usr/xpg4 path for those who wanted POSIX behavior.  That
trade-off of immediate compatibility for future pain has been causing
future pain ever since.

Interestingly, at around the same time they made that decision, they also
dropped bcopy and bzero from the standard library, provoking a ton of
porting problems.  (They reversed that decision later and added them
back.)

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: AC_PROG_CC: how to distinguish clnag from gcc?

2021-02-08 Thread Russ Allbery
Sébastien Hinderer  writes:

> Did you notice important changes in the supported warnings between
> different versions of the same compiler?

Yes, mostly that new versions of the compiler add new warnings, and
sometimes new warning flags.  I probe for whether the compiler supports
each flag at configure time, and periodically I review the release notes
of the two compilers and add new interesting warning flags.

> Are there many warnings in common between (at least one version of)
> clang and (at least one version of) gcc, actually?

I think there is some overlap between the flags (-W -Wall are the same,
for instance), although I didn't investigate in detail because I started
down the -Weverything with a blacklist path with Clang early on and GCC
doesn't (so far as I know) support that.

For the record, apparently the Clang folks don't really want people to use
-Weverything.  I still do anyway, for maintainer builds, and have not had
any real problems with that (and have been grateful to pick up new
warnings automatically), but it shouldn't be a default.

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: AC_PROG_CC: how to distinguish clang from gcc?

2021-02-05 Thread Russ Allbery
"David A. Wheeler"  writes:

> The gcc & clang groups coordinate with each other; they try to provide
> the same flags & API for the same functionality, and occasionally copy
> from each other.

This is not my experience.  In particular, the warning flags for Clang are
significantly different than the warning flags for GCC, giving rise to
logic like this to configure warnings used for maintainer builds:

AS_IF([test x"$CLANG" = xyes],
    [WARNINGS_CFLAGS="-Werror"
     m4_foreach_w([flag],
        [-Weverything -Wno-cast-qual -Wno-disabled-macro-expansion -Wno-padded
         -Wno-sign-conversion -Wno-reserved-id-macro
         -Wno-tautological-pointer-compare -Wno-undef -Wno-unreachable-code
         -Wno-unreachable-code-return -Wno-unused-macros
         -Wno-used-but-marked-unused],
        [RRA_PROG_CC_FLAG(flag,
            [WARNINGS_CFLAGS="${WARNINGS_CFLAGS} flag"])])],
    [WARNINGS_CFLAGS="-g -O2 -D_FORTIFY_SOURCE=2 -Werror"
     m4_foreach_w([flag],
        [-fstrict-overflow -fstrict-aliasing -Wall -Wextra -Wformat=2
         -Wformat-overflow=2 -Wformat-signedness -Wformat-truncation=2
         -Wnull-dereference -Winit-self -Wswitch-enum -Wstrict-overflow=5
         -Wmissing-format-attribute -Walloc-zero -Wduplicated-branches
         -Wduplicated-cond -Wtrampolines -Wfloat-equal
         -Wdeclaration-after-statement -Wshadow -Wpointer-arith
         -Wbad-function-cast -Wcast-align -Wwrite-strings -Wconversion
         -Wno-sign-conversion -Wdate-time -Wjump-misses-init -Wlogical-op
         -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes
         -Wmissing-declarations -Wnormalized=nfc -Wpacked -Wredundant-decls
         -Wrestrict -Wnested-externs -Winline -Wvla],
        [RRA_PROG_CC_FLAG(flag,
            [WARNINGS_CFLAGS="${WARNINGS_CFLAGS} flag"])])])

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: AC_PROG_CC: how to distinguish clnag from gcc?

2021-02-05 Thread Russ Allbery
Sébastien Hinderer  writes:

> It seems AC_PROG_CC wrongly believes clang is gcc and that may cause
> problems when clang is passed a warning which is only supposrted by gcc,
> as is the case e.g. for -Wno-stringop-truncation.

> Is there a recommended way to determine for sure from a configure script
> whether the detected C compiler is clang or gcc?

You can test each individual flag, but I found that tedious and irritating
because I wanted to write a list of warning flags for Clang based on its
manual and a list of warning flags for GCC based on its manual.

Rather than try to detect GCC, I reversed the logic and tried to detect
Clang instead, which seems to be somewhat easier.

dnl Source used by RRA_PROG_CC_CLANG.
AC_DEFUN([_RRA_PROG_CC_CLANG_SOURCE], [[
#if ! __clang__
#error
#endif
]])

AC_DEFUN([RRA_PROG_CC_CLANG],
[AC_CACHE_CHECK([if the compiler is Clang], [rra_cv_prog_cc_clang],
    [AC_COMPILE_IFELSE([AC_LANG_SOURCE([_RRA_PROG_CC_CLANG_SOURCE])],
        [rra_cv_prog_cc_clang=yes],
        [rra_cv_prog_cc_clang=no])])
 AS_IF([test x"$rra_cv_prog_cc_clang" = xyes], [CLANG=yes])])

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: autoconf-2.69c released [beta]

2020-09-29 Thread Russ Allbery
Bob Friesenhahn  writes:

> Gavin, thanks very much for your help.  Just in case someone reads this
> discussion later, I found that there was a small syntax error in the
> above.  The following is what finally worked for me (a small change after
> ac_cv_have_C__func__='no'):

> # Test for C compiler __func__ support
> if test "$ac_cv_have_C__func__" != 'yes' ; then
> AC_CACHE_CHECK(for C compiler __func__ support, ac_cv_have_C__func__,
> [AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]],
> [[const char *func=__func__;
> return (func != 0 ? 0 : 1);
> ]])],
> [ac_cv_have_C__func__='yes'],
> [ac_cv_have_C__func__='no'])])

> if test "$ac_cv_have_C__func__" = 'yes' ; then
>  AC_DEFINE(HAS_C__func__,1,Define if C compiler supports __func__)
> fi
> fi

This is separate from the question of how Autoconf should handle old
configure scripts and how autoupdate should work, but while you're
manually making changes to macros anyway, you will probably be happier in
the long run if you quote all arguments and make a habit of using AS_IF
instead of open-coded shell if statements.

# Test for C compiler __func__ support
AS_IF([test "$ac_cv_have_C__func__" != 'yes'], [
AC_CACHE_CHECK([for C compiler __func__ support], [ac_cv_have_C__func__],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]],
[[const char *func=__func__;
return (func != 0 ? 0 : 1);
]])],
[ac_cv_have_C__func__='yes'],
[ac_cv_have_C__func__='no'])])

AS_IF([test "$ac_cv_have_C__func__" = 'yes'], [
 AC_DEFINE([HAS_C__func__],[1],[Define if C compiler supports __func__])
])
])

This will protect against a lot of edge cases.

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: Makefile.am target name prefix *.o

2020-02-01 Thread Russ Allbery
Vincent Blondel  writes:

> Encounter a strange behaviour with autoconf.
> I do not understand why all the compiled *.o files are prefixed with
> target-xxx.o.

> Below an example ... have no progname yet for my executable hence let's
> call it main for now ...

> bin_PROGRAMS = main
> main_SOURCES =  obj1.cc obj2.cc

> Executable is OK but have no clue why I have something like this at the end
> ? ...

> src/main-obj1.o
> src/main-obj2.o

This is actually Automake rather than Autoconf (Makefile.am is Automake).
This renaming is documented in the Automake manual:

https://www.gnu.org/software/automake/manual/html_node/Renamed-Objects.html
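
(The usual trigger described there is a per-target flag variable; a
hypothetical example:

    bin_PROGRAMS = main
    main_SOURCES = obj1.cc obj2.cc
    main_CXXFLAGS = -DSOMETHING

Any such per-target flag makes Automake compile the sources into
main-obj1.o and main-obj2.o, so that the same source can be built with
different flags for different targets.)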

-- 
Russ Allbery (ea...@eyrie.org) <https://www.eyrie.org/~eagle/>



Re: sharedlocalstate default - does anybody use this?

2018-06-06 Thread Russ Allbery
"Mariano, Adrian V."  writes:

> So what's the correct practice for my package?  Should I use
> sharedstatedir and not worry about it resolving to a strange default
> path because mostly it will get changed by site defaults?

Interestingly, Debian's standard tool for this doesn't override
sharedstatedir, so this will create a surprising directory the first time
the maintainer builds it (and then they'll investigate and presumably fix
it by passing in the right configure flag).  That also probably indicates
that it's very rarely used.  I think most people use localstatedir for
this sort of file, since the distinction between sharedstatedir and
localstatedir only matters for models of software installation that are
now almost extinct.

That said, I agree that it's still perfectly fine to use it.  The lack of
an override is a bug in debhelper, and units may be the prompt for us to
fix it.  :)  (Debian will almost certainly just point sharedstatedir at
the same directory as localstatedir.)

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: How to check for SIGABRT?

2017-12-08 Thread Russ Allbery
Simon Sobisch <simonsobi...@web.de> writes:

> We use the autoconf generated testsuite script in GnuCOBOL to test the
> compiler and runtime - and it works very well for "normal" tests.

> There are some tests where the compiler should abort and it does, but
> when it does so "correctly" by raising SIGABRT we can check for return
> code 134 but get an additional stderr message similar to
> "/full/path/to/testsuite.de/testcasenumber/run aborted on line 40" (and
> I don't know if SIGABRT will result in return code 134 on all "exotic"
> systems).

> The following options come to mind:

> * use `trap` in AT_CHECK to catch the error if it is the expected result
> --> should fix the additional stderr, we still would have to check for
> return code 134

> * use some builtin expectation similar to XFAIL(true) - but I don't
> found anything like this in the docs

> * when running the testsuite (we have one case where we check this via
> an environment variable set in atlocal and return 77 for skipping the
> test if the compiler cannot use an external tool) don't raise SIGABRT
> but something like `exit 96`

> Do you have any experience/thoughts about this?

Personally, I'd make the actual test a tiny C program that execs argv,
waits for it to exit, and then inspects the resulting exit status to see
if it died with SIGABRT.

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: Starting /bin/sh in output configure file

2017-07-13 Thread Russ Allbery
R0b0t1 <r03...@gmail.com> writes:

> A hardcoded binary path isn't portable, the correct solution is to use
> `env sh`. Typically this is seen as:

> #!/usr/bin/env sh

> Which technically causes the same problem Mr. Akhlagi was
> experiencing, but on most desktop Unixes the location of `env` is more
> predictable than the location of various interpreters.

I am extremely dubious that the location of env is more predictable than
/bin/sh.  python or perl or whatnot, yes, but not /bin/sh, which is
probably the most hard-coded path in UNIX.

But more to the point for this specific request, it would surprise me a
good deal if Android would move sh and not also move env to some other
directory than /usr/bin.  If you're going to break compatibility that
much, it's unlikely you're going to keep compatibility for env.  But,
well, I've been surprised before.

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: Proper location to install shell function libraries?

2017-03-01 Thread Russ Allbery
Ralf Corsepius <rc040...@freenet.de> writes:

> $libdir/<package>/ (e.g. %libdir/<package>) is the playground a package can
> install more or less whatever it wants, comprising executables.

> As your "scripts" don't seem to be "programs", $libdir/<package>/ probably
> is what you are looking for.

$datadir/<package>/, no?  Script libraries are almost certainly
architecture-independent.
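
(With Automake, for example, a hypothetical fragment like

    dist_pkgdata_DATA = common.sh functions.sh

installs them under $(datadir)/$(PACKAGE)/.)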

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: AC_DEFINE_UNQUOTED does not expand $libexecdir

2016-04-03 Thread Russ Allbery
"Richard B. Kreckel" <krec...@in.terlu.de> writes:

> Instead, it leaves it unexpanded as ${exec_prefix}/libexec, which is
> unusable within C code. This has been explained very well here:
> http://stackoverflow.com/questions/8264827/pass-directory-to-c-application-from-compiler

> The workaround is cumbersome: it requires us to fiddle with CPPFLAGS in
> the Makefile. Seriously, this can't be a feature.

It's also explained very well in the Autoconf manual.  I'm not sure anyone
is horribly happy with how this works and with the number of people who
are surprised by it, but it's difficult to find a different way to
implement the requirements of the GNU Coding Standards in this area.

|Most of these variables have values that rely on 'prefix' or
| 'exec_prefix'.  It is deliberate that the directory output variables
| keep them unexpanded: typically '@datarootdir@' is replaced by
| '${prefix}/share', not '/usr/local/share', and '@datadir@' is replaced
| by '${datarootdir}'.
| 
|This behavior is mandated by the GNU Coding Standards, so that when
| the user runs:
| 
| 'make'
|  she can still specify a different prefix from the one specified to
|  'configure', in which case, if needed, the package should hard code
|  dependencies corresponding to the make-specified prefix.
| 
| 'make install'
|  she can specify a different installation location, in which case
|  the package _must_ still depend on the location which was compiled
|  in (i.e., never recompile when 'make install' is run).  This is an
|  extremely important feature, as many people may decide to install
|  all the files of a package grouped together, and then install links
|  from the final locations to there.
| 
|In order to support these features, it is essential that
| 'datarootdir' remains defined as '${prefix}/share', so that its value
| can be expanded based on the current value of 'prefix'.
| 
|A corollary is that you should not use these variables except in
| makefiles.  For instance, instead of trying to evaluate 'datadir' in
| 'configure' and hard-coding it in makefiles using e.g.,
| 'AC_DEFINE_UNQUOTED([DATADIR], ["$datadir"], [Data directory.])', you
| should add '-DDATADIR='"$(datadir)"'' to your makefile's definition of
| 'CPPFLAGS' ('AM_CPPFLAGS' if you are also using Automake).
| 
|Similarly, you should not rely on 'AC_CONFIG_FILES' to replace
| 'bindir' and friends in your shell scripts and other files; instead, let
| 'make' manage their replacement.

There are more details in this section ("Installation Directory
Variables") about how to write Makefile rules to do this substitution.

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: Add clang++ to AC_PROG_CXX

2016-03-31 Thread Russ Allbery
Ruben Safir <ru...@mrbrklyn.com> writes:
>> On 3/16/2016 4:02 AM, Václav Haisman wrote:

>>> Cool. I do not remember exactly if this was my motivation for the
>>> original submission but I believe this is still relevant for Cygwin
>>> where you can AFAIK install Clang and not install GCC (which creates
>>> the /usr/bin/cc symlink to gcc). Without the patch, no compiler will be found
>>> even though Clang is present. This patch fixes the situation.

> That is not a bug, it was a desired feature...

I'm not sure who you are or what your role in Autoconf development is, or
why you feel empowered to speak for the project and its goals, but
Autoconf has always supported *proprietary* compilers (let alone free
software compilers that use BSD-style unrestrictive licenses).

This comes directly from its pragmatic role as a tool to help people
deploy free software on their local systems, even if those systems are
mostly not free software.  Many, many people have installed Autoconf-built
software on proprietary operating systems as their first step in using
free software, and have gone on to use more and more free software and
even free operating systems.  I'm one of those people.  Without the
ability to install free software on proprietary systems with proprietary
tools, those people might not have ever considered free software.

Given the decline in proprietary UNIX platforms, this goal may not be as
common today, but I think it clearly still exists.  While proprietary UNIX
platforms are not as common on servers (largely due to the amazing success
of free software), Mac OS X is very widely used and plays a comparable
ecosystem role.

It might make strategic sense in some cases to decline to cooperate with
proprietary software (and maybe even free but not copyleft software,
although I'm much more dubious there) as a way of solidifying the
strategic advantage of free software and making our community stronger.
But if it does, it would be in a place where free software provides some
compelling user benefit over proprietary software that proprietary
software would like to copy.  Apart from us all, the sort of people who
enjoy reading mailing lists about build systems, the build system for a
piece of software is rarely, if ever, the piece that offers that sort of
compelling benefit.  Building a piece of software so that you can try it
is not the place where you fall in love with it; rather, it's a necessary
evil to get to the interesting bit of actually using the software.

Given that, I believe the right *ideological* role for Autoconf, in full
cooperation with the FSF's principles and ideals, is to make it as easy as
possible to get free software working on any platform, even proprietary
platforms, because that's how we get our foot in the door, and that's how
we get more people using free software.  Let's save strategic lack of
cooperation for some other area where the benefits for users are actually
compelling.

-- 
Russ Allbery (ea...@eyrie.org)  <http://www.eyrie.org/~eagle/>



Re: AC_PROG_CC wrongly setting $GCC=yes while clang is used

2014-09-08 Thread Russ Allbery
Bastien Chevreux b...@chevreux.org writes:

 Would it be worthwhile to forward this to the GNU compiler maintainers
 so that they could maybe correct their course by maybe introducing a
 define which is ‘reserved’ for telling that, yes, this is indeed a GNU
 compiler?

That's what __GNUC__ was for.  However, from the perspective of the
authors of the other compilers, this is a bug -- they want to be able to
compile code that uses GCC extensions, which is why they're implementing
those extensions.  So they *want* their compiler to be detected as capable
of supporting GCC extensions.

So, if GCC added a new define, the other compilers would just start
defining that symbol as well.

I'm afraid the only hope you have, if you depend on extensions that are
not implemented by other compilers, is to test explicitly for those
extensions.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_PROG_CC wrongly setting $GCC=yes while clang is used

2014-09-08 Thread Russ Allbery
Bastien Chevreux b...@chevreux.org writes:

 And that’s the point: as a developer of a program who uses a compiler
 just as a mean to get things done, I’m totally not interested in this …
 and I shouldn’t. It’s enough for me to know that a given compiler is
 buggy between version X and Y when a given flag is used to simply not
 use that flag there.

For this particular situation, which I think is relatively rare, I would
just parse the output of gcc -v.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_SEARCH_LIBS without LIBS

2014-05-16 Thread Russ Allbery
Peter Johansson troj...@gmail.com writes:
 On 17/05/14 01:47, NightStrike wrote:

 This is an issue if you need to check for the existence of a library,
 but only use it in a subset of all of your things being compiled.

 Is this considered a bug in AC_SEARCH_LIBS, or is there a workaround
 I'm missing?

 Occasionally I've used the pattern

 save_LIBS=$LIBS
 AC_SEARCH_LIBS([...
 LIBS=$save_LIBS

Likewise.  I do this a lot.
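
Spelled out a bit more (a sketch; zlib's deflate is just an example
symbol, and ZLIB_LIBS is a name I made up):

    save_LIBS=$LIBS
    ZLIB_LIBS=
    AC_SEARCH_LIBS([deflate], [z],
        [AS_IF([test x"$ac_cv_search_deflate" != x"none required"],
            [ZLIB_LIBS=$ac_cv_search_deflate])])
    LIBS=$save_LIBS
    AC_SUBST([ZLIB_LIBS])

Only the targets that actually need the library then link against
$(ZLIB_LIBS), instead of everything picking it up from $LIBS.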

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: configure speedup proposal: add --assume-c99 and --assume-posix2008 flags

2014-03-23 Thread Russ Allbery
John Spencer maillist-autoc...@barfooze.de writes:

 having an option like --assume-c99 could provide a shortcut so all
 checks like

 - have stdint.h
 - have snprintf()
 - etc

These are un-alike, just to mention.  A surprising number of platforms
have an snprintf function that's broken.  To test it properly, you need
something akin to the gnulib snprintf check, and it's broken in enough
places that you may not want to assume the result.

Some of the problems with snprintf are also quite serious.  For example,
Solaris 9 (which I believe is otherwise C99-conforming) would return -1
when given NULL, 0 as the first arguments to snprintf, which makes it
impossible to use snprintf safely in many programs.

See:

https://www.gnu.org/software/gnulib/manual/html_node/snprintf.html

for more information about the portability mess.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: Use system extensions conditionally?

2014-03-03 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 On the other hand, BSD's funopen() is exposed in stdio.h, which IS a
 standard header; but the name funopen() is not in the list of reserved
 name patterns in POSIX.  Therefore, if BSD wants to comply with POSIX,
 then when _POSIX_C_SOURCE is defined before including stdio.h, then
 funopen() must NOT be visible to a user of stdio.h.  Now, not everyone
 cares about being strictly POSIX-compliant, so the default behavior,
 when you don't request extensions, might have names leaked into the
 environment.  But on the converse side, when you KNOW you want to use
 extensions, you are best off to explicitly request those extensions.
 And if you are going to use non-POSIX extensions, you might as well use
 all extensions on all platforms, all in one go.

Of course, one of the drawbacks is that you have an all-or-nothing
situation.  Either you have to stick to only POSIX interfaces, or you have
to give the local system carte blanche to stomp all over your namespace,
possibly resulting in code breakage down the line when some new local
extension is introduced that conflicts with your code.

This of course isn't a problem that Autoconf can solve.  It's basically
inherent in how the feature test macros work.  But it's worth being aware
of.  All namespace bets are essentially off once you enable system
extensions.
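
(On the Autoconf side this is all just one call early in configure.ac:

    AC_USE_SYSTEM_EXTENSIONS

which defines _GNU_SOURCE, __EXTENSIONS__, and the other extension macros
it knows about, with no supported way to ask for only some of them.)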

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_PATH_PROGS_FEATURE_CHECK results in namespace invasion

2014-02-26 Thread Russ Allbery
Peter Rosin p...@lysator.liu.se writes:

 It's not possible to use AC_PATH_PROGS_FEATURE_CHECK with a cache
 variable that does not start with ac_cv_path_ which is bad since one
 project might check for a certain capability of tool foo, while some
 other project is interested in some completely orthogonal capability of
 that same tool foo. As written, it is very likely that both projects
 will use the cache variable ac_cv_path_FOO for orthogonal purposes,
 which is bad by design.

Wouldn't you name the variable FOO_FEATURE?  In other words, I think the
example in the Autoconf manual is not the best, but the functionality you
need is there.  I would rewrite the example as:

AC_CACHE_CHECK([for m4 that supports indir], [ac_cv_path_M4_indir],
  [AC_PATH_PROGS_FEATURE_CHECK([M4_indir], [m4 gm4],
  ...
AC_SUBST([M4], [$ac_cv_path_M4_indir])

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_PATH_PROGS_FEATURE_CHECK results in namespace invasion

2014-02-26 Thread Russ Allbery
Peter Rosin p...@lysator.liu.se writes:
 On 2014-02-27 01:20, Russ Allbery wrote:

 Wouldn't you name the variable FOO_FEATURE?  In other words, I think
 the example in the Autoconf manual is not the best, but the
 functionality you need is there.  I would rewrite the example as:
 
 AC_CACHE_CHECK([for m4 that supports indir], [ac_cv_path_M4_indir],
   [AC_PATH_PROGS_FEATURE_CHECK([M4_indir], [m4 gm4],
   ...
 AC_SUBST([M4], [$ac_cv_path_M4_indir])

 That doesn't work, because then you can't shortcut the test with the
 natural M4=/some/gnu/place/m4, you'd be forced to write M4_indir=...
 instead. This is even worse than saving the ac_cv_path_FOO variable
 during the AC_PATH_PROGS_FEATURE_CHECK macro, since it would be visible
 to the end user.

Good point.  However, couldn't you just put:

: ${ac_cv_path_M4_indir:=$M4}

before the above code?

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: Passing -fno-lto by default?

2014-02-06 Thread Russ Allbery
Markus Trippelsdorf mar...@trippelsdorf.de writes:

 Unfortunately the idiom above is quite common for visibility=hidden
 testing.

Why would one not instead build the object and then use objdump on it to
look at the exported symbols?  It's still not ideally portable, but it
seems like it should be more portable than trying to grep the assembly
output.
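
Something along these lines, for instance (a sketch rather than a drop-in
test; the symbol and file names are made up):

    cc -fvisibility=hidden -c conftest.c -o conftest.o
    objdump -t conftest.o | grep '\.hidden.*my_symbol'

GNU objdump marks hidden symbols with .hidden in its symbol table listing,
which seems easier to match reliably than grepping generated assembly.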

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: Turn on compiler warnings by default for AC_PROG_CC, AC_PROG_CXX AC_PROG_FC

2014-01-17 Thread Russ Allbery
Zack Weinberg za...@panix.com writes:

 -ansi, however, should not be in there at all; it doesn't just turn on
 strict conformance mode, it turns on strict *C89* conformance mode,
 which is often wrong for new code.  And even nowadays, strict
 conformance mode in general tends to break system headers.

Seconded.  I have one package that I do test with -ansi, but mostly out of
personal curiosity.  Satisfying -ansi requires several contortions that
are not really helpful for real-world portability for typical free
software packages, such as limiting the length of quoted strings and
messing about with feature-test macros.  I think it's more of a
specialized flag best reserved for environments with very particular
standards-conformance requirements, or where one is targeting platforms
that may not provide any functionality over the bare ANSI minimum.

The same concerns apply to a lesser extent to --std=c99 or --std=c11.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_*/AM_* macros for options

2013-10-30 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 Perhaps I'm approaching this the wrong way (I probably don't have your
 experience with the platform). When Linux/Unix folks turn off
 -Wconversion, what do they use to find the bad conversions?

The last time I turned it on for a project, the only warnings it produced
were either complaints about code that was provably fine without changes
or bogus warnings about ntohs.  I fixed the ones that were fine anyway,
just because I'm anal like that, and then turned it off because I always
use -Werror when doing development builds and am not willing to turn on
warnings I can't suppress in every circumstance.  (Sometimes I cheat and
use pragmas to turn them off for particular files, but I really hate doing
that, and I definitely don't want to do that for everything that uses
ntohs.)

Now that the ntohs problem has finally been fixed, I'll probably try
turning it back on again, since for the most part my code doesn't deal
with issues like the time_t one that Paul cited, and I have a high
tolerance for tweaking code to avoid warning false positives.  But I agree
with Paul's point in general: it's a warning that doesn't work well with
code that's dealing with corner cases, like types that could be signed on
some hosts and unsigned on others.

There are a lot of gcc warnings that are really only useful in specific
situations.

FWIW, my normal warning set is:

# A set of flags for warnings.  Add -O because gcc won't find some warnings
# without optimization turned on.  Desirable warnings that can't be turned
# on due to other problems:
#
# -Wconversion  http://bugs.debian.org/44 (htons warnings)
#
# Last checked against gcc 4.7.2 (2013-04-22).  -D_FORTIFY_SOURCE=2 enables
# warn_unused_result attribute markings on glibc functions on Linux, which
# catches a few more issues.
WARNINGS = -g -O -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wendif-labels  \
-Wformat=2 -Winit-self -Wswitch-enum -Wuninitialized -Wfloat-equal \
-Wdeclaration-after-statement -Wshadow -Wpointer-arith \
-Wbad-function-cast -Wcast-align -Wwrite-strings   \
-Wjump-misses-init -Wlogical-op -Wstrict-prototypes\
-Wold-style-definition -Wmissing-prototypes -Wnormalized=nfc   \
-Wpacked -Wredundant-decls -Wnested-externs -Winline -Wvla -Werror

For one piece of code that doesn't use Autoconf at all and is therefore
designed to be maximally portable with no probing, I add -ansi -pedantic.
I normally don't do that because it requires manually fiddling with
feature test macros and actually decreases portability in some situations.

I'm currently only using warning flags for developer builds.  I'm still
making up my mind on whether enabling them for production builds would
find more problems than it would cause.

Obviously, -Werror is almost never appropriate for production builds
unless the developers control all the hosts on which it will be built, or
unless it's only turned on optionally.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_*/AM_* macros for options

2013-10-29 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 -Wconversion should be included. That's because -1 > 1 after promotion:

 signed int i = -1;
 unsigned int j = 1;

 if(i > j)
 printf("-1 > 1 !!!\n");

 I understand its going to be a pain point for those who don't pay
 attention to details.

-Wconversion produces (or at least produced; I've not rechecked recently)
unfixable warnings for most network code due to macro expansion.  See:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=44

The pattern explained in that bug is still present in the current glibc
headers.

-- 
Russ Allbery (ea...@eyrie.org)  http://www.eyrie.org/~eagle/



Re: AC_LINK_IFELSE unexpectedly fails on Ubuntu 12.04

2013-10-25 Thread Russ Allbery
David Pointon dpoin...@uk.ibm.com writes:

 The above confirms that the freetype headers have, but interestingly not 
 the libraries have not, been found. Inspection of the build log reveals 
 the following command to have been run:

 configure:34201:  /usr/bin/g++-4.6 -o conftest -I/usr/include/freetype2 
 -I/usr/include -L/usr/lib/i386-linux-gnu -lfreetype conftest.cpp >&5

The contents of $FREETYPE_LIBS are being put on the compilation command
line before the program to be compiled instead of afterwards.  This has
never been guaranteed to work.  Some linkers dynamically reorder the link
line and cause this to work, but you can't rely on that behavior, and it
stopped working in the versions of the toolchain currently in Ubuntu.

I suspect somewhere in the configure machinery for this package, something
is adding $FREETYPE_LIBS to $LDFLAGS.  This is wrong.  $LDFLAGS and $LIBS
are intentionally separate, and $LIBS flags cannot go into $LDFLAGS for
exactly this reason.

Compilers and linkers are generally more forgiving about the location of
the -L flags, so putting $LDFLAGS flags into $LIBS will usually work if
there's no way to get properly separated $LDFLAGS and $LIBS variables,
which may be the case here.

Short version: If I'm right, find where the configure script is throwing
$FREETYPE_LIBS into $LDFLAGS and change it to add those flags to $LIBS
instead.
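
In configure.ac terms the difference is roughly this (a sketch;
FREETYPE_LIBS is whatever pkg-config or freetype-config produced):

    LDFLAGS="$LDFLAGS $FREETYPE_LIBS"   # wrong: -lfreetype lands before conftest.cpp
    LIBS="$FREETYPE_LIBS $LIBS"         # right: libraries stay after the objects that need them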

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Empty else part in AS_IF

2013-10-10 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 Another solution is to ensure that AM_CONDITIONAL is always defined
 (where its definition is a no-op if using an old automake that did not
 already define it):

 m4_define_default([AM_CONDITIONAL])
 AS_IF([test x$var != xfalse],
   [$test=1],
   [AM_CONDITIONAL([TEST], [false])])

This would reintroduce the same problem, though, wouldn't it?
AM_CONDITIONAL would expand to nothing, and then the else branch of AS_IF
would be empty.  Or does this give AS_IF enough information to figure that
out because it avoids using the lower-level m4_* function?

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Empty else part in AS_IF

2013-10-10 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 Indeed.  The problem is that autoconf cannot tell if a non-empty literal
 will expand to empty text (m4_ifdef results in no output).  You'll have
 to workaround it yourself:

 AS_IF([test x$var != xfalse],
 [$test=1],
 [: m4_ifdef(...)])

But if the contents of m4_ifdef expand into shell code, doesn't this
prepend : to the first line of that shell code, effectively commenting it
out?  It seems cleaner to use [:] in the else branch of m4_ifdef for that
reason.
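
That is, something along these lines (a sketch of the idea only):

    AS_IF([test x$var != xfalse],
        [$test=1],
        [m4_ifdef([AM_CONDITIONAL],
            [AM_CONDITIONAL([TEST], [false])],
            [:])])

so that the else branch always expands to at least one shell command.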

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Empty else part in AS_IF

2013-10-10 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 If that's the case, then write AM_CONDITIONAL so that it always produces
 a shell statement:

 m4_define_default([AM_CONDITIONAL], [:])

Oh, good idea.  Yes, that's even cleaner.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: AC_CHECK_ALIGNOF maximum ??

2013-06-20 Thread Russ Allbery
Nick Bowler nbow...@elliptictech.com writes:

 C11 also provides max_align_t, which is *probably* what you are looking
 for but obviously isn't available everywhere.  Anyway, on older
 implementations without max_align_t, the following type is probably a
 good enough substitute for it:

   union {
 char a;
 short b;
 int c;
 long d;
 long long e;
 float f;
 double g;
 long double h;
 void *i;
   }

I would add a function pointer, but yes.

 You could use AC_CHECK_TYPE to test for max_align_t, then use
 AC_CHECK_ALIGNOF on the above monster if it is not available.

 You may also want to test for long long availability before
 including it in the union...

And long double, which IIRC is more of a portability issue than long long.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-06-01 Thread Russ Allbery
Peter Rosin p...@lysator.liu.se writes:
 On 2013-06-01 00:06, Russ Allbery wrote:

 Autoconf doesn't work with MSVC directly so far as I know.  All of the
 packages I have that are ported to MSVC have a separate hand-written
 config.h that's used for MSVC builds, and in that file one simply
 doesn't define HAVE_STRINGS_H.

 What do you mean directly? MSYS can drive a build using MSVC as toolchain
 (instead of MinGW) just fine. I do it all the time.

I meant in terms of being able to probe directly for a header file using
the standard configure script, so I may be wrong.  You can currently get
checking for strings.h... not found on Windows systems from the regular
configure script?

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-06-01 Thread Russ Allbery
Peter Rosin p...@lysator.liu.se writes:
 On 2013-06-01 08:09, Russ Allbery wrote:
 Peter Rosin writes:
 On 2013-06-01 00:06, Russ Allbery wrote:

 Autoconf doesn't work with MSVC directly so far as I know.  All of
 the packages I have that are ported to MSVC have a separate
 hand-written config.h that's used for MSVC builds, and in that file
 one simply doesn't define HAVE_STRINGS_H.

 What do you mean directly? MSYS can drive a build using MSVC as
 toolchain (instead of MinGW) just fine. I do it all the time.

 I meant in terms of being able to probe directly for a header file
 using the standard configure script, so I may be wrong.  You can
 currently get checking for strings.h... not found on Windows systems
 from the regular configure script?

 That all works nicely, and libtool happily creates DLLs using MSVC etc etc.
 The build infrastructure is generally not the problem, the WIN32 api and
 the deficiencies in the POSIX jokes in libc are the much bigger problem.
 You need MSYS and you need to convert the provided vcvars.bat file to a
 shell script you can source from your MSYS bash, that's about it, and off
 you go.

In that case I think Autoconf should continue probing for strings.h rather
than just defining HAVE_STRINGS_H.  If the header check has meaningful
results on a platform on which Autoconf is currently working, there
doesn't seem to be a reason to drop it, IMO.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-05-31 Thread Russ Allbery
Zack Weinberg za...@panix.com writes:

 Second, it cleans up AC_INCLUDES_DEFAULT and all the other canned
 tests so that they don't waste time checking for ISO C90 headers,
 which are now ubiquitous (stddef.h, stdlib.h, string.h, wchar.h,
 wctype.h, locale.h, time.h) and don't use pre-standard headers that
 were replaced by C90 headers at all (memory.h and strings.h).

I *think* your patch would remove strings.h from the list of headers that
are probed by default by Autoconf, and hence remove HAVE_STRINGS_H from
the preprocessor directives set by Autoconf.

If so, note that removing strings.h from the list of headers that are
probed by default will cause backwards compatibility issues.  One still
must include strings.h (not string.h) according to POSIX in order to get
strcasecmp and friends, and some operating systems (specifically at least
some versions of FreeBSD) do actually enforce that and do not prototype
those functions in string.h.  I'm quite sure there is code out there that
assumes that Autoconf will probe for strings.h as a side effect of other
probes and set HAVE_STRINGS_H, and therefore doesn't probe for it
explicitly.  (I maintain some of it, in fact.)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-05-31 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 Yes, there is a bunch of code that non-portably assumes they can use
 strcasecmp or ffs without including strings.h.  On the other hand,
 strings.h is available on pretty much ALL platforms that use free
 software compilers (according to gnulib, only ancient Minix 3.1.8 and
 non-free MSVC 9 have problems with assuming strings.h exists and is
 self-contained; but mingw does not have this issue).  Thus, you
 generally don't need to use HAVE_STRINGS_H, but can just blindly include
 it, unless your package is trying to be portable to a rather unforgiving
 toolchain.

Sure, I can add the explicit probe for strings.h to my code or drop the
#ifdef depending on the portability requirements (and in fact will do so
regardless, since whether or not this change is made it's a good idea).

My point wasn't that, but rather that this change will break backward
compatibility for Autoconf users, and I wanted to make sure that people
were aware of that.  There is code that includes strings.h protected by
#ifdef HAVE_STRINGS_H which currently compiles correctly and will stop
compiling correctly after the configure script is rebuilt with this
change.

All of the other changes that Zack discussed look backward-compatible to
me except in the area of portability to entirely obsolete systems.  This
one stood out because it would break compilation of some packages on
modern, maintained FreeBSD systems because it changes current assumptions
about what the standard Autoconf header probes do.

This behavior is not currently clearly documented, but I will point out
that the current documentation of AC_INCLUDES_DEFAULT shows use of the
HAVE_STRINGS_H define without any mention of any other required checks.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-05-31 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 That said, would it hurt if autoconf just unconditionally defined the
 macros that were previously conditionally defined by a probe, so that
 code that was relying on HAVE_STRINGS_H instead of blind inclusion will
 still compile?

That would certainly resolve my concern.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 0/2] Modernize header checks

2013-05-31 Thread Russ Allbery
Peter Rosin p...@lysator.liu.se writes:
 On 2013-05-31 19:19, Eric Blake wrote:

 That said, would it hurt if autoconf just unconditionally defined the
 macros that were previously conditionally defined by a probe, so that
 code that was relying on HAVE_STRINGS_H instead of blind inclusion will
 still compile?

 How would one do to be portable to both some versions of FreeBSD and
 MSVC, then? (MSVC 10 also lacks strings.h, btw) One camp needs
 HAVE_STRINGS_H to be defined and one needs to not have it defined.
 Sounds evil to unconditionally define it under those circumstances. Or
 have I misunderstood something?

Autoconf doesn't work with MSVC directly so far as I know.  All of the
packages I have that are ported to MSVC have a separate hand-written
config.h that's used for MSVC builds, and in that file one simply doesn't
define HAVE_STRINGS_H.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [RFC] getting rid of the config.guess/sub problem when bootstrapping new ports/systems

2013-05-15 Thread Russ Allbery
Thomas Petazzoni thomas.petazz...@free-electrons.com writes:
 On Tue, 14 May 2013 23:53:44 -0400, Mike Frysinger wrote:

 yes, Gentoo fixed this for every package in our tree like 9 years ago
 (we added a common function like 11 years ago that ebuilds could call
 manually, but we found that didn't scale).  when you run a standard
 autoconf script, we automatically search for files named config.sub
 and config.guess and replace them with the up-to-date host copy.  no
 checking or anything :).  in hindsight, that seems like a bad idea, but
 in practice, i think we have yet to find a package that this doesn't
 actually work.

 FWIW, we do the same thing in Buildroot (a tool that builds embedded
 Linux systems from source, through cross-compilation). Never had any
 problem doing so.

Debian is moving in that direction as well.  We have two different package
helper tools that do this in different ways.  As always with Debian,
though, we're not very centralized about practices, so it takes a while to
get this deployed consistently across the whole archive.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [RFC] getting rid of the config.guess/sub problem when bootstrapping new ports/systems

2013-05-15 Thread Russ Allbery
Mike Frysinger vap...@gentoo.org writes:

 if Gentoo blowing away your rinky dinky config.sub hack breaks your
 project, then take it as a sign that You're Doing It Wrong :).

I think this may be one of those historical momentum things.  As INN
maintainer, I used to carry local patches to config.guess and config.sub
to add support for platforms on which my users were trying to build INN
that weren't supported by the current config.guess and config.sub scripts,
usually because the patches had been submitted but they were very slow to
release a new version.  This problem went away completely some time back,
with a new and far faster and more responsive process for maintaining
those scripts, and now I just use the most recent released versions for
all packages.  Some projects may still be carrying unnecessary hacks
because they started carrying them back in those days and have never gone
back and revisited.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: autoconf/tools produces to many top level files

2013-04-12 Thread Russ Allbery
Bob Rossi b...@brasko.net writes:
 On Fri, Apr 12, 2013 at 06:49:59AM -0600, Eric Blake wrote:
 On 04/12/2013 05:30 AM, Bob Rossi wrote:

 I'm creating a new project and using autotools. I've done this before,
 but for some reason this time I've noticed how many files autotools
 creates. It totally pollutes the top level of my project.
 
 lib - Mine originally
 aclocal.m4

 Ask automake if that can be moved.  Autoconf does not create it.

 Wow! Thanks for this information, very helpful!

 I didn't solve aclocal.m4 yet, but..

You can tell aclocal to write the file somewhere else (with --output).
You may want to try just putting it into your AC_CONFIG_MACRO_DIR and see
if it happens to work.  Although that's going to force you to keep your
bootstrap script, since that means calling aclocal with a flag instead of
just running autoreconf.

 bootstrap

 Autotools don't create this.  That must be your doing, or else you are
 using gnulib.

 Yes, this is mine. It's my autoreconf script. Does autoreconf work fine
 these days? or is there another way to generate all the scripts?

autoreconf just works, and I would pass it --no-cache.  I always suppress
the cache and don't really notice a significant difference.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Autoreconf stops with non-POSIX variable name

2013-04-01 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 If you care about non-GNU make users, then you can't use $(shell).  And
 as long as you are going to mandate that your package be built with GNU
 make, then you might as well go all the way and document that fact in
 your README file, as well as:

 This is actually an Automake question, but the short answer is that
 you probably have fatal warnings enabled and you need to add
 -Wno-portability to the Automake flags (in AM_INIT_AUTOMAKE, for
 example).

 ...tell automake that you don't care about the non-portability aspect by
 adding -Wno-portability to your AM_INIT_AUTOMAKE, at which point you'd
 no longer need your @DOLLAR_SIGN@ hack.

Yeah, if you're going to require GNU make, just say so and add
-Wno-portability.  That's the whole point of the flag.  Hiding the dollar
sign from Automake so that it can't detect the portability issue is an odd
way of expressing things and just adds complexity to no real purpose.
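
Concretely that's just (the version number is only an example):

    AM_INIT_AUTOMAKE([1.11 -Wall -Wno-portability])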

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Faillure to autogenerate file on hpux

2013-03-31 Thread Russ Allbery
Bastien ROUCARIES roucaries.bast...@gmail.com writes:

 Any idea to get portable basename shell command ?

Personally, I just use basename, but the Autoconf manual does say:

`basename'
 Not all hosts have a working `basename'.  You can use `expr'
 instead.

The manual didn't list an alternative, but some searching (I'm not
horribly familiar with the regex syntax in expr) says:

`expr //$file : '.*/\([^/]*\)'`

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Autoreconf stops with non-POSIX variable name

2013-03-29 Thread Russ Allbery
oborchert borch...@nist.gov writes:

 I created a Makefile.in where I read the content out of a file and pass
 it to CFLAGS. Calling ./configure ... the Makefile will be generated an
 all works well.

 Makefile.in:
 ...
 MY_REVISION_FILE=my-revision.txt
 MY_REVISION=$(shell cat $(top_srcdir)/$(MY_REVISION_FILE))
 AM_CFLAGS = -I$(EXTRAS_INCLUDE_DIR) -I$(top_srcdir)
 -DMY_REVISION=$(MY_REVISION)
 ...

 The problem arises once I moved the Makefile.in code into Makefile.am to
 allow the auto generation of  Makefile.in. There calling autoreconf -i
 --force stops with the following error: 

 server/Makefile.am:9: cat $(top_srcdir: non-POSIX variable name
 server/Makefile.am:9: (probably a GNU make extension)
 autoreconf: automake failed with exit status: 1

 This problem hunts me now since quite some time. I searched everywhere
 but did not find anything that could help me finding a solution for
 that. In short, the only thing I need is a way to get an uninterpreted
 text such as $(shell cat $(top_srcdir)/$(MY_REVISION_FILE)) copied
 from Makefile.am to Makefile.in

This is actually an Automake question, but the short answer is that you
probably have fatal warnings enabled and you need to add -Wno-portability
to the Automake flags (in AM_INIT_AUTOMAKE, for example).

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Faillure to autogenerate file on hpux

2013-03-29 Thread Russ Allbery
Bastien ROUCARIES roucaries.bast...@gmail.com writes:

 Last version of imagemagick fail to build on hpux during build. We
 supsect  a autoconf bug.

 Unfortunatly we have no access to hpux.

 cd . && /bin/sh ./config.status config/delegates.xml
 config.status: creating config/delegates.xml
 cd . && /bin/sh ./config.status config/configure.xml
 config.status: creating config/configure.xml
 ln -s PerlMAgick/quantum/Q16.xs
 usage: ln [-f] [-I] [-s] f1 f2
 ln [-f] [-I] [-s] f1 ... fn d1

That ln command looks like something that you're telling Autoconf to run
with AC_CONFIG_COMMANDS, and it's not portable.  Omitting the destination
argument to ln is a GNU extension.  Try changing that to:

ln -s PerlMAgick/quantum/Q16.xs Q16.xs

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: AX_CREATE_PKGCONFIG_INFO and hardcoded paths

2013-03-27 Thread Russ Allbery
LRN lrn1...@gmail.com writes:

 However, -L/root/lib and -I/root/include are hardcoded, and thus are
 completely wrong (and potentially dangerous, if the system where cloog
 is deployed has /root/include and/or /root/lib directories).

 Is this an AX_CREATE_PKGCONFIG_INFO feature, or cloog does something
 wrong?

The various versions of AX_CREATE_PKGCONFIG_INFO that I've seen all do
things that I consider rather dodgy, such as putting the entire contents
of CFLAGS into the pkgconfig file (including any user-supplied CFLAGS at
configure time and including optimization flags).  I suspect this is
similar.  I'm not sure if there's a newer version of the macro available
that's less aggressive about copying the *entire* build configuration into
the pkgconfig file, but in the meantime I've preferred to construct my
pkgconfig files using sed from Makefile.am so that I have more control
over exactly what goes into them.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Cross-platform availability of header files

2013-03-14 Thread Russ Allbery
Zack Weinberg za...@panix.com writes:

 I think we should try to come up with a principled cutoff for how old is
 too old, though. I started this thinking POSIX.1-2001 (including XSI,
 but maybe not any other options) was a reasonable place to draw the
 line, but it turns out Android omits a bunch of that (and not the old
 junk either) so it's not so simple.

 You can assume a C89 hosted environment does still seem like a sound
 assertion, though.

It's also important not to exclude Windows, which sometimes is missing
some things that are otherwise universal.  Autoconf can't probe on Windows
*directly* (without using one of the UNIX-like portability shells, which
often provide the missing bits as well), but the fact that Autoconf
generates the conditionals (in config.h for example) is still extremely
useful in combination with a hand-written config.h.win that's used by
Windows MSVC builds.

(ssize_t was the thing I ran into most recently that Windows doesn't have
but which is otherwise universal and required by POSIX.)
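
As an aside, a minimal sketch of the kind of fallback a hand-written
config.h.win can carry for that case (the use of SSIZE_T from the Windows
SDK here is an assumption for illustration, not taken from any particular
project):

#ifdef _MSC_VER
# include <BaseTsd.h>           /* provides SSIZE_T */
typedef SSIZE_T ssize_t;        /* MSVC has no ssize_t of its own */
#endif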

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Enabling compiler warning flags

2012-12-20 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 If a project does not observe proper preprocessor macros for a
 configuration, a project could fall victim to runtime assertions and
 actually DoS itself after the assert calls abort(). The ISC's DNS server
 comes to mind (confer: there are CVE's assigned for the errant behavior,
 and its happened more than once!
 http://www.google.com/#q=isc+dns+assert+dos).

It's very rare for it to be sane to continue after an assert().  That
would normally mean a serious coding error on the part of the person who
wrote the assert().  The whole point of assert() is to establish
invariants which, if violated, would result in undefined behavior.
Continuing after an assert() could well lead to an even worse security
problem, such as a remote system compromise.

The purpose of the -DNDEBUG compile-time option is not to achieve
additional security by preventing a DoS, but rather to gain additional
*performance* by removing all the checks done via assert().  If your goal
is to favor security over performance, you never want to use -DNDEBUG.
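
To make the tradeoff concrete, here is a small hypothetical example (not
from the original message) of what -DNDEBUG actually changes:

#include <assert.h>
#include <stddef.h>
#include <string.h>

void
copy_field(char *dst, size_t dstlen, const char *src, size_t srclen)
{
    /* Invariant promised by the caller.  With -DNDEBUG this check
       compiles to nothing; without it, a violation calls abort() rather
       than continuing into undefined behavior (here, a buffer overflow). */
    assert(srclen < dstlen);
    memcpy(dst, src, srclen);
    dst[srclen] = '\0';
}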

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Enabling compiler warning flags

2012-12-19 Thread Russ Allbery
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 Most of the the -z,blahblah options could be eliminated if the OS and
 toolchain were to arrange to do useful security things by default. They
 could do useful security things by default and flags could disable
 safeguards for rare code which needs to intentionally do the things
 guarded against.

Ubuntu patches gcc to enable a bunch of these options.  Debian discussed
doing the same and decided not to, since Debian really dislikes diverging
from upstream on things that have that much public-facing visibility, and
instead built it into our packaging system.

I think having the toolchain do some of this automatically has been a hard
sell for understandable backwards-compatibility concerns, but that would
certainly be something that could be explored across multiple GNU
projects.  Although one of the problems with making toolchain changes is
that the needs of embedded systems, who are heavy toolchain users, are
often quite different.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Enabling compiler warning flags

2012-12-17 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
 I know Drepper objects to the safer string/memory functions, but his
 way (the way of 1970's strcpy and strcat) simply does not work. I
 don't disagree that the safer functions are not completely safe, but I
 refuse to throw the baby out with the bath water.

Having tried both styles, what works even better than replacing strcpy and
strcat with strlcpy and strlcat, or the new *_s functions, is to replace
them with asprintf.  You have to do a little bit of work to be guaranteed
to have asprintf (or a lot of work if you want to support platforms with a
broken snprintf as well), but gnulib will do it for you, and that coding
style is so much nicer than trying to deal with static buffers and
worrying about truncation, particularly if you design the software with
that in mind from the start.  Yes, it's probably slower, but I'll trade
speed for clarity and safety nearly all of the time.
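
As a minimal sketch of that style (assuming asprintf is available, either
from glibc with _GNU_SOURCE or via gnulib; the helper and format here are
made up for illustration):

#define _GNU_SOURCE             /* for asprintf with glibc */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: build "<dir>/<name>.conf" with no fixed-size
   buffer, so there is no truncation case to think about. */
static char *
config_path(const char *dir, const char *name)
{
    char *path;

    if (asprintf(&path, "%s/%s.conf", dir, name) < 0)
        return NULL;            /* the only failure mode is out of memory */
    return path;                /* caller is responsible for free() */
}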

(Or you could also dodge the memory management problems by using a C
framework that supports garbage collection, like APR, but that's farther
afield of this list.)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Enabling compiler warning flags

2012-12-17 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 Yeah, I think you are right about asprintf (though I have never used it).

 I can't count how many times I've seen silent truncation due to sprint.
 Most recently, I pointed it out on some SE Android patches (Android port
 of SE Linux) that passed by the NSA sponsored mailing list. They went
 unfixed. Amazing.

Silent truncation is the primary reason why strlcpy and strlcat aren't in
glibc.  Both functions are designed to silently truncate when the target
buffer isn't large enough, and few callers deal with that.  This
ironically can actually create other types of security vulnerabilities
(although it's probably less likely to do so than a stack overflow).

asprintf guarantees that you don't have silent truncation; either you run
out of memory and the operation fails, or you get the whole string.  The
cost, of course, is that you now have to do explicit memory management,
which is often what people were trying to avoid by using static buffers.
But it *is* C; if you're not going to embrace explicit memory management,
you may have picked the wrong programming language  :)

strlcpy and strlcat have some benefit in situations where you're trying to
add some robustness (truncation instead of overflow) to code with
existing, broken APIs that you can't change, which I suspect was some of
the original motivation.  But if you can design the APIs from the start,
I'd always use strdup and asprintf (or something more sophisticated like
obstacks or APR pools) instead.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Selecting a C++ standard

2012-10-27 Thread Russ Allbery
Adrian Bunk b...@stusta.de writes:

 Real buildable by C89 or later is rarely used, since due to lack of 
 long long you have no guaranteed 64bit integer type in C89.

Almost none of the software that I work on requires a 64-bit integer type.
(C89 or later is also my default target for the software I write.)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Selecting a C++ standard

2012-10-27 Thread Russ Allbery
Adrian Bunk b...@stusta.de writes:
 On Sat, Oct 27, 2012 at 06:45:01PM -0700, Russ Allbery wrote:

 Almost none of the software that I work on requires a 64-bit integer type.
 (C89 or later is also my default target for the software I write.)

 I just tried to build remctl and lbcd with CC="gcc -pedantic-errors",
 and both failed due to them not being pure C89 (some errors are at the
 bottom of this email).

Which version of remctl was this?  The trailing comma in enums was a bug,
but it was already fixed in both 3.3 and in the current master.  I checked
just now with that flag, and remctl builds apart from the string length
issue (once one undoes Autoconf's detection of C99 variadic macro
support).

This:

 server/remctld.c:53:1: error: string length ‘610’ is greater than the length 
 ‘509’ ISO C90 compilers are required to support [-Woverlength-strings]

I do indeed intentionally ignore since I've never seen a system (including
otherwise C89 systems, and even older systems) that actually failed to
compile long string constants.  This limit was in the standard mostly
for microcontroller and other free-standing platforms.

These:

 ./check_reply.c:28:10: error: initializer element is not computable at load 
 time
 ./http.c:35:12: error: initializer element is not computable at load time
 ./monlist.c:86:10: error: initializer element is not computable at load time

are indeed all bugs.  lbcd is software that I didn't originally write and
just finished adopting, and I hadn't done a -pedantic-errors pass on it
yet.  I appreciate the note; I'll go and fix those now.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [PATCH 1/2] AC_CONFIG_MACRO_DIRS: new macro, mostly for aclocal

2012-10-21 Thread Russ Allbery
Andrew W. Nosenko andrew.w.nose...@gmail.com writes:
 On Wed, Oct 17, 2012 at 1:14 AM, Eric Blake ebl...@redhat.com wrote:

 Wrong.  For better or worse, Libtool has a goal of specifically being
 usable even without autoconf or automake.

 Yes, but this goal is missed.  And missed very far and heavy.  At the
 current state, libtool is just a semi-portable automake plugin for
 generation of shared libraries.  May be I'm not too polite, but can
 you, or anyone else, point me to the even one real live non-died
 project, which uses libtool without automake?  (Before you ask: Yes,
 of course, Feel free to forward this message to libtool mailing list)

I know of at least two major open source projects that use Libtool without
Automake (INN and OpenAFS).  As the person who implemented the support for
INN, I can say that it wasn't even particularly hard, although some parts
were somewhat obscure.

I wouldn't actually recommend this approach, particularly if one were
starting from scratch, since Automake offers a lot of nice features and
automation for the integration.  But it's entirely possible and even
desirable if one is (such as is the case for OpenAFS) retrofitting shared
library support onto an old (20+ years in this case) code base with its
own extremely complex build system.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [RFC] getting rid of the config.guess/sub problem when bootstrapping new ports/systems

2012-10-09 Thread Russ Allbery
Adrian Bunk b...@stusta.de writes:

 One problem is that in new upstream versions of
 autoconf/automake/libtool there are sometimes slight incompatibilities,
 and you end up with shipping many different versions of each of these
 tools (even today Debian already ships 5 different versions of
 autoconf).

 E.g. putting automake 1.13 [1] as automake package into Debian would 
 then require updating thousands of packages just for getting the whole 
 archive to build again.

This is a lot of work, yes.  But personally I think it's valuable work
that we shouldn't shy away from, and with which we can often help
upstream.  We do the same thing each time a new g++ is released; I don't
think there's been a major g++ release that hasn't required some patches
to one of the C++ packages I maintain.

(autoconf-dickey is a separate matter; at this point, I think it's best to
treat that as a permanent fork.)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [RFC] getting rid of the config.guess/sub problem when bootstrapping new ports/systems

2012-10-08 Thread Russ Allbery
Paul Wise pa...@bonedaddy.net writes:

 In the meantime, within Debian we will be pursuing both per-package
 updating of config.guess/sub and I'm also thinking about getting our
 binary package build toolchain to take that role, but I'm not sure how
 well that would be received within Debian or how well it would work.

Personally, I've already started converting every package I maintain that
uses Autoconf to using dh_autoreconf during the build.  I wonder if that
isn't a better long-term solution for Debian.  config.guess/config.sub
have caused the most frequent problems, but we've had problems in the past
from old versions of Libtool as well, and dh_autoreconf resolves the
entire problem (at least for packages for which it works).

Paul, separately, if you haven't already, could you request that the
Lintian checks for outdated config.guess/config.sub be updated for
whatever version you need for arm64?  We could also recommend
dh_autoreconf at the same time, which might resolve a lot of these
problems going forward.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [RFC] getting rid of the config.guess/sub problem when bootstrapping new ports/systems

2012-10-08 Thread Russ Allbery
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 Does simple replacement of config.guess and config.sub constitute a
 useful port to this previously unencountered target?

Believe it or not, yes, frequently it does.

Note that this is specifically in the context of Debian, which means that
all of these platforms are Linux and they're all using glibc.  The
variation between systems is therefore much less than one might expect,
and less than a lot of packages using config.guess/config.sub are
adjusting for.  There are a lot of packages that have one case that works
on all Linux systems, and those will generally work fine on a new Debian
architecture as long as config.guess/config.sub doesn't explode when
attempting to determine the triplet.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc files

2012-09-20 Thread Russ Allbery
Bruce Korb bk...@gnu.org writes:
 On 09/19/12 11:03, Russ Allbery wrote:

 ...  I believe the method using sed is
 correct, and generating the file at Autoconf time is not as correct...

 For my perspective, there really ought to be a simple macro that just
 exports all the configured values to whatever external command anyone
 wants to use.  e.g. always have all Makefile-s define this:

 CONFIGURABLES = \
   prefix=$(prefix) \
 

 so that you can, for example, use this in any directory:

$(CONFIGURABLES) $(SHELL) some-script

 and have the right thing happen.

We're probably drifting into Automake territory here, but I would even
take that one step further: wouldn't it be nice if Automake had a standard
facility to do build-time substitution of (fully-expanded) AC_SUBST
variables in a file?  In other words, just create a file with the same
syntax as an AC_OUTPUT file template, but tell Automake to generate it at
build time rather than configure time, and Automake would then take care
of doing the sed for you plus fully expand the variables rather than
leaving them unexpanded the way that config.status does (and has to, to
permit build-time changes to ${prefix}).

That doesn't entirely replace your idea, since your idea would permit
making decisions and using conditionals, but it would certainly be
sufficient for generating pkg-config files and I suspect it would handle
a number of other cases.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc file

2012-09-20 Thread Russ Allbery
Nick Bowler nbow...@elliptictech.com writes:

 I wonder if all that's needed is an option to override a configure-time
 decision when you run config.status?  Then you could use the
 config.status machinery to perform substitutions by calling it at make
 time, and not have to maintain your own code that does the same thing.
 Then the makefile could have a rule (either automatically provided by
 Automake or hand-written by the user) that looks something like:

config.status also doesn't fully expand the variables, though, which is
something that you'd want for this sort of application.  Otherwise, you
have to know all of the possible ways in which one variable could
reference another so that you can set all of them (and you can only write
files that have a variable resolution system, although pkg-config does
have one).  Note that the user can do things like
--libdir='${datadir}/foo' at configure time, even if it's not common.

So yes, you could use config.status, but only if it also had a flag that
said to fully expand the variables before substituting them in, and I
suspect that's a fair amount of change to the script.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc file

2012-09-20 Thread Russ Allbery
Nick Bowler nbow...@elliptictech.com writes:
 On 2012-09-20 13:14 -0700, Russ Allbery wrote:

 config.status also doesn't fully expand the variables, though, which is
 something that you'd want for this sort of application.  Otherwise, you
 have to know all of the possible ways in which one variable could
 reference another so that you can set all of them (and you can only
 write files that have a variable resolution system, although pkg-config
 does have one).  Note that the user can do things like
 --libdir='${datadir}/foo' at configure time, even if it's not common.

 Ah, but make will fully expand the variables.

Only the ones that you're explicitly overriding.  What about all the other
variables that you may want to use with the same values that were set
during configure?

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc file

2012-09-20 Thread Russ Allbery
Nick Bowler nbow...@elliptictech.com writes:

 However, is this really a problem in practice for anything but
 installation directory variables?  All installation directory variables
 are supposed to be overridable at make time, so they will all need
 explicit overrides (at least all the ones you are intending to
 substitute), and therefore they will all be fully expanded.

Ah, true.

This means that you're always going to pass quite a long list of overrides
to config.status, though, which sort of comes back to my feeling that it
would be nice if Automake supported this sort of thing natively rather
than requiring that people roll it themselves.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc file

2012-09-20 Thread Russ Allbery
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

 Regardless of what is supposed to be supported, make time overrides are
 not as reliable as settings from the configure script since it depends
 on the same overrides being applied each time that make is executed.
 Any variance will not be detected by make, and so already built build
 products won't be re-built and so they will be wrong.  It is possible
 that make is executed several/many times during a build, and also at
 install time.

 That is why I use configure and config.status substitutions to build the
 .pc file for my package. :-)

Thus making the problem much better by *guaranteeing* that the *.pc file
is wrong if there are any make-time overrides!  Er, wait  :)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc files

2012-09-19 Thread Russ Allbery
Vincent Torri vto...@univ-evry.fr writes:

 But it seems that several projects use sed in their Makefile.am to use
 the value of $libdir, $includedir, etc.. to generate their pc files. So
 they have in foo.pc

 libdir=${libdir}

 So I would like to know the opinion of the autoconf dev about what the
 correct way to generate pc file is.

Well, I'm not an Autoconf developer, so feel free to ignore this, but I've
been very happy with the following recipe.  A *.pc.in file that looks
like:

prefix=@prefix@
exec_prefix=@exec_prefix@
includedir=@includedir@
libdir=@libdir@

Name: name
Description: description
URL: url
Version: @PACKAGE_VERSION@
Cflags: -I${includedir}
Libs: -L${libdir} -llibrary
Libs.private: other-libs

with the ... bits replaced with whatever is appropriate for your
library, and then the following in Makefile.am (adjusting file paths
accordingly, of course):

client/libremctl.pc: $(srcdir)/client/libremctl.pc.in
sed -e 's![@]prefix[@]!$(prefix)!g' \
-e 's![@]exec_prefix[@]!$(exec_prefix)!g'   \
-e 's![@]includedir[@]!$(includedir)!g' \
-e 's![@]libdir[@]!$(libdir)!g' \
-e 's![@]PACKAGE_VERSION[@]!$(PACKAGE_VERSION)!g'   \
-e 's![@]GSSAPI_LDFLAGS[@]!$(GSSAPI_LDFLAGS)!g' \
-e 's![@]GSSAPI_LIBS[@]!$(GSSAPI_LIBS)!g'   \
$(srcdir)/client/libremctl.pc.in > $@

Note the last two sed expressions, which show how to handle additional
flags for linking with other libraries.  @GSSAPI_LDFLAGS@ @GSSAPI_LIBS@ is
in the Libs.private part of my *.pc.in file in this case.

This has make expand all the variables for you, so you can safely omit
exec_prefix if you want; nothing else will refer to it, since the
variables will be fully collapsed.  I include it just for the hell of it.

The result, for this particular package as installed on Debian (so with a
multiarch libdir) looks like:

prefix=/usr
exec_prefix=/usr
includedir=/usr/include
libdir=/usr/lib/i386-linux-gnu

Name: remctl
Description: Remote authenticated command execution with ACLs
URL: http://www.eyrie.org/~eagle/software/remctl/
Version: 3.2
Cflags: -I${includedir}
Libs: -L${libdir} -lremctl
Libs.private:  -lgssapi_krb5

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc files

2012-09-19 Thread Russ Allbery
Vincent Torri vto...@univ-evry.fr writes:

 I know that way. I mentioned it ( But it seems that several projects
 use sed in their Makefile.am etc... See above)

 My question is : which solution is the correct one, as they obviously
 don't give the same result ?

I suppose I should have been clearer.  I believe the method using sed is
correct, and generating the file at Autoconf time is not as correct
(although it's workable).

Bastien ROUCARIES roucaries.bast...@gmail.com writes:

 See autoconf-archive mavcros
 http://www.gnu.org/software/autoconf-archive/ax_create_pkgconfig_info.html#ax_create_pkgconfig_info

That macro has gotten better (it at least doesn't put the user's CPPFLAGS
and LDFLAGS into the *.pc file like it used to), but I still would not
recommend people use it.  By default, it still puts all LIBS into Libs,
not Libs.private, which is nearly always wrong and results in excessive,
unnecessary shared library dependencies.  And, as mentioned, it doesn't
support changing prefix at make time.

It also has a weird workaround to force the variables to be expanded,
where it evals each setting multiple times.  There's probably nothing
erroneous about that, but I don't like it as well as letting make just do
the expansion.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: generating pc files

2012-09-19 Thread Russ Allbery
Mike Frysinger vap...@gentoo.org writes:
 On Wednesday 19 September 2012 14:03:51 Russ Allbery wrote:

 That macro has gotten better (it at least doesn't put the user's
 CPPFLAGS and LDFLAGS into the *.pc file like it used to), but I still
 would not recommend people use it.  By default, it still puts all LIBS
 into Libs, not Libs.private, which is nearly always wrong and results
 in excessive, unnecessary shared library dependencies.  And, as
 mentioned, it doesn't support changing prefix at make time.

 send a patch

Why would I do that when I think the correct thing to do is to generate
the file via Makefile.am?  :)  I disagree with the entire approach, so I
don't have much enthusiasm for trying to fix the fixable bugs in an
approach that I think is basically flawed.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: RFE: macro for warning at configure-time if CFLAGS includes -Werror

2012-09-19 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 As a dumb user, I want to use a cookbook. That means I want to do a:

./configure CFLAGS="-Wall -Wextra"

 I don't want to have to learn how to use autoconf, automake, and make.
 I don't want to subscribe to mailing list to make things work. I just
 want it to work as expected.

If you're an end user following a cookbook, you probably should not be
overriding the decisions of the package maintainer and adding additional
warning flags.  Warning flags are useful for more sophisticated users to
detect possible bugs in the software.  Users who are just following
cookbooks and who aren't prepared to debug the software are not going to
gain anything useful by enabling a bunch of optional warnings, let alone
trying to use -Werror.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: RFE: macro for warning at configure-time if CFLAGS includes -Werror

2012-09-19 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 I would like to leave it alone. But *every* FOSS project I've seen
 (and *all* closed source security audits I've performed) neglect the
 security related stuff. That means I have to act because the supply
 chain in under my purview - I have no choice.

Ah, okay, yes, that's a good point.  But -Werror (apart from the one
specifically about format options, which configure probes don't trigger so
far as I know) is not particularly useful from a security perspective.
And even the one for format options doesn't make the software build more
secure; it's a debugging tool to find potential security problems.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Why conditionally include config.h?

2012-09-15 Thread Russ Allbery
Kip Warner k...@thevertigo.com writes:

 Thanks Russ. That was very helpful. Is there a general rule on which
 source files to #include config.h, or is it as simple as any files
 that needs now or may need in the future the information contained in
 it. One exception as previously pointed out would be of course to never
 #include it in non-local / public headers.

I wouldn't go so far as to say that this is a general rule for everyone
using Autoconf, but the way that I do it, which I believe is fairly
common, is:

* All regular source files (*.c files, for example) #include config.h as
  the first non-comment line in the file.

* Internal (non-installed) header files #include config.h (generally
  after the multiple-inclusion header guard) if and only if they use
  Autoconf results directly.  I do not do this if the header file just
  uses data types that Autoconf may rename, only if the header file uses
  the regular HAVE_* symbols and the like.

  I do this only because I'm kind of obsessive about header files being
  self-contained and hence including everything they use.  Since every
  source file has #include config.h before anything else, including
  internal headers, there's really no need to include config.h in internal
  header files.  You may not want to follow my example here.  :)

* Public, installed header files never #include config.h.  In general, I
  try to write all public interfaces so that they don't rely on anything
  outside of C89 or C99 (whatever I'm targetting) and therefore do not
  need any conditional results.

  Sometimes this isn't possible; when it isn't, I generate a separate
  publicly-installed header file that contains the definitions that I need
  but renamed so that they're within the namespace of my project (adding
  FOOBAR_ in front of all the HAVE_ symbols, for example).  I usually just
  roll this myself, but there are various macros to do this in, for
  example, the Autoconf Archive if you have this problem.  I then include
  that header file in other publicly-installed header files.

Packages that violate the latter principle are extremely annoying.  Note
that, because of the various PACKAGE_* defines, any Autoconf-generated
config.h is basically guaranteed to always conflict with any config.h from
another package, so this isn't something you can ever get away with.

For example, when developing Apache modules, I have to generate a
separate, stripped mod-config.h file in Autoconf to use with the module
source files, since they have to include Apache header files as well and
the Apache header files include an Autoconf-generated config.h file with
no namespacing.  This is all very awkward and annoying, so please don't do
that.  :)  (Other common culprits for this are the headers generated by
scripting languages for their dynamically-loaded extension mechanism.)
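
As a concrete illustration of the first convention above (file and header
names are made up, and this assumes -I. or an equivalent is passed to the
compiler so that <config.h> is found), a source file starts like this:

/* frobnicate.c */
#include <config.h>             /* always first, unconditionally */

#include <stdio.h>
#include <stdlib.h>
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif

#include "frobnicate.h"         /* internal header; kept self-contained */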

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Why conditionally include config.h?

2012-09-13 Thread Russ Allbery
Marko Lindqvist cazf...@gmail.com writes:
 On 14 September 2012 02:43, Eric Blake ebl...@redhat.com wrote:
 On 09/13/2012 05:22 PM, Kip Warner wrote:

 Why do many autoconfiscated projects bracket inclusion of the
 generated config.h with #if HAVE_CONFIG_H / #endif.

 Bad copy-and-paste habits.

  I've seen some packages where same sources are used with multiple build
 systems (typically autotools in more unixy systems and visual studio
 project files on Windows) and it's actually needed to weed out
 config.h include when building with system that does not provide it.
 But more often #ifdef HAVE_CONFIG_H is just idiom copied from some
 other project.

I believe the #ifdef wrapper used to be recommended by the Autoconf manual
way back, many moons ago (2.13 days at the latest), because it was how the
transition from defining things via -D on the command line to using a
header was handled.  It goes along with the definition of @DEFS@, which
previously (and by previously I mean a long time ago) used to contain all
the results of configure as -D flags to the compiler, but which was
replaced by just -DHAVE_CONFIG_H if you used the AC_CONFIG_HEADERS (or
its earlier versions) macro.

So, in other words, you could transition your package that was assuming -D
flags on the compiler command line to using a config.h header by adding
that #ifdef code to the source files, and then it would work with either
Autoconf method: either a config.h file or direct -D flags on the compiler
command line.
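
Spelled out, the wrapper in question is just:

#ifdef HAVE_CONFIG_H
# include <config.h>            /* or "config.h", depending on the project */
#endif

which compiles correctly whether configure's results arrive through a
config.h header or through -D flags on the compiler command line.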

I suspect the above is what's happening when you see this in really old
projects, and in newer projects it's copy and paste from older projects.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Problems Configuring (C Compiler cannot produce executables)

2012-08-23 Thread Russ Allbery
Miles Bader mi...@gnu.org writes:
 Russ Allbery r...@stanford.edu writes:

 Also, you should generally not add -Wall -Wextra to the configure
 flags, and instead add it after configure completes, since many of the
 tricks configure has to use will result in warnings when you turn on
 all the compiler warnings, which can confuse configure.

 How can that confuse configure?

 AFAICT, configure seems quite unconcerned with warnings during
 configuration.

I may be misremembering previous discussions here, and the fact that we do
indeed seem to pass -Wall into configure all the time without any trouble
makes me think that I am misremembering, but I thought there were some
checks where (due largely to various broken vendor compilers) configure
had to analyze the compiler output to figure out if things went wrong.

It's possible that I'm conflating this discussion with cases where people
use -Werror, which has more obvious issues.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Problems Configuring (C Compiler cannot produce executables)

2012-08-23 Thread Russ Allbery
Ralf Corsepius rc040...@freenet.de writes:

 No idea. The working priciples of standard autoconf checks are based on
 evaluating compiler errors only and to ignore warnings[1], therefore -Wall
 -Wextra must not desturb by definition.

 However, adding -Werror to CFLAGS is dangerous, because this will raise
 GCC warnings to errors, which will cause autoconf to become confused and
 to produce bogus results.

 Ralf

 [1] There exist (non-standard) autoconf checks which are based on
 evaluating compiler warnings. If properly written, these also should not
 be affected by -Wall -Wextra, ... if they are, these checks need to be
 considered broken ;)

I've clearly just misremembered, then.  Apologies for the noise.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-22 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 Here are the flags I am interested in. Again, the developers generally
 don't supply them (it compiles, so ship it!). I'm interested in
 warnings too because I need to see dumb, CompSci 101 mistakes such as
 ignoring return values, truncation problems, and conversion problems.
 When I find them, I need to fix them because developers don't care about
 these things (it compiles, so ship it!)

 EXECUTABLE:
 -Wall -Wextra -Wconversion -fPIE -pie -Wno-unused-parameter -Wformat=2
 -Wformat-security -fstack-protector-all -Wstrict-overflow
 -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now

 SHARED OBJECT:
 -Wall -Wextra -Wconversion -fPIC -shared -Wno-unused-parameter
 -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow
 -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now

*If* the package uses libtool, which I realize is a big if, just pass
-fPIE in CFLAGS and don't worry about the difference.  Libtool is already
adding -fPIC -shared when building the shared objects, and is smart enough
to drop -fPIE from the shared objects as pointless.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-22 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 $ ./configure CFLAGS="-Wall -Wextra -Wconversion -fPIE
 -Wno-unused-parameter -Wformat=2 -Wformat-security
 -fstack-protector-all -Wstrict-overflow -Wl,-pie -Wl,-z,noexecstack
 -Wl,-z,relro -Wl,-z,now"

The thing that jumps out at me as different between what Debian uses for
its normal hardening flags and what you're using is the -Wl,-pie flag in
CFLAGS.  Debian just uses -fPIE in CFLAGS and then adds -fPIE -pie to
LDFLAGS.  I'm not sure if that would make a difference.

You in general want to avoid ever using -Wl if you can help it, since
you're hiding the flag from the compiler by using that.  If the compiler
needed to know that you were linking that way so that it could do other
magic itself, you break that support by using -Wl.

Here's what Debian is using:

CFLAGS=-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security
CPPFLAGS=-D_FORTIFY_SOURCE=2
CXXFLAGS=-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security
FFLAGS=-g -O2
LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now

Also, you should generally not add -Wall -Wextra to the configure flags,
and instead add it after configure completes, since many of the tricks
configure has to use will result in warnings when you turn on all the
compiler warnings, which can confuse configure.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-22 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 Debian does a good job. I think there is room for improvement (such as
 DEP and ASLR), and hope the maintainers stiffen their security posture
 in the future. The idea: make it secure out of the box, and let those
 who want to shot themselves in the foot do so. For example, apply
 -z,noexecstack out of the box, and let folks turn it off with
 -z,execstack.

Right.  Debian took a fairly conservative approach (in fact, pie and
bindnow are off by default, but can be easily turned on) because we were
trying to do something archive-wide without having to make a lot of
special exceptions.  Being able to turn off executable stack as at least
another easily-accessible option is an interesting idea, and I may raise
that on debian-devel.  (Although it can be a little hard to predict which
packages need that.  Hm, and I seem to recall that GCC does some stuff
with executable stack automatically.)

 This was a very good point and I had to think about it for a while.

 Are there Autoconf variable for this? For example, rather than:
   ./configure CFLAGS=... CXXFLAGS=...

 could we instead use Autoconf defined stuff:
   ./configure ac_warnings="-Wall -Wextra -Wconversion" \
 ac_cflags=-fstack-protector-all... \
 ac_so_flags=... ac_exe_flags=...

There are not, at least so far as I know.

It's a little tricky to add the flags after the fact unless you override
all of CFLAGS at build time and provide the full set of hardening flags
again.  One of the standard tricks is to override CC instead, with
something like:

make CC="gcc -Wall -Wextra"

 Autoconf could use ac_cflags as it being used now(?) and save
 ac_warnings for later use (by Automake?) when real source files are
 compiled.

It would be nice to have some additional support directly in standard
Autoconf macros for handling compiler warning flags, although I suspect
there is stuff in both the macro archive and in gnulib.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-21 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 I want hardened executables and shared objects. That includes ASLR,
 which means -fPIE -pie for executables; -fPIC and -shared for shared
 objects. According to the dialog from the GCC feature request, -fPIC and
 -shared should be used as it appears to be a superset of -fPIE -pie.

-fPIC is only for libraries.  For executables, such as what's created by
configure, you want -fPIE.  See, for example, the documentation for how to
deploy hardening flags in Debian (as one of many examples of distributions
doing this that I just happen to be familiar with personally):

http://wiki.debian.org/Hardening/

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-21 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 According to Pinksi at GCC, -fPIC can be used for both. Both -fPIC and
 -fPIE produce a relocatable section. I know from experience readelf(1)
 produces the same result (DYN).

 When using -fPIE, the optimizer can begin optomizing sooner. Andrew
 Pinski (GCC developer): With PIE, global variables and functions are
 considered to bind local while with PIC they are considered to bind
 globally (aka override able). [1]

 Pinski specifically recommended -fPIC because of this situation
 (inability to configure executables and shared objects separately when
 using the GNU tool chain).

Well, all that's fine and good, but then you passed those flags into GCC
and they didn't, er, work.  :)  So reality seems to have come into
conflict with the advice you got.

This definitely isn't Autoconf's fault, at least.

I suspect the actual problem may be more the -Wl,-shared than the -fPIC,
since ld -shared specifically means that you are *not* creating an
executable, but rather are creating a shared library:

   -shared
   -Bshareable
   Create a shared library.  This is currently only supported on ELF,
   XCOFF and SunOS platforms.  On SunOS, the linker will automatically
   create a shared library if the -e option is not used and there are
   undefined symbols in the link.

But you're passing it *only* to the linker (via -Wl), not to the compiler,
so the compiler and the linker now disagree on whether the result is going
to be a shared library or an executable, and badness happens.

So, well, don't do that.  :)

I know for certain that the Debian set of hardening flags, which use
-fPIE, not -fPIC, for executables, work across a *very large* array of
open source software (although we do have to omit -fPIE from the default
set since -fPIE breaks some software), and I believe that other
distributions do the same.  I won't venture to express an opinion on the
relative merits of -fPIC versus -fPIE, particularly to compiler experts,
but in my humble opinion you should prefer flags that actually function.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [autoconf] Problems Configuring (C Compiler cannot produce executables)

2012-08-21 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 I think the solution is to update Make so that there are separate
 LD_EXE_FLAGS and LD_SO_FLAGS. But that was rejected too...

Yeah, I understand both the reason why the idea was rejected and the
reason why it's appealing.

Autoconf isn't the place to look for new flags to pass only to shared
library links (which I think would be the right way to go about that), but
the Automake folks might be interested in talking that over.  I've run
into a few places where I'd like to do that as well (for example,
-Wl,-Bsymbolic).

Currently, the way this works in the Automake world is that the shared
library builds are done via Libtool, which adds the appropriate additional
flags for shared libraries on the local platform.  This generally works
quite well, but so far as I know there isn't a good way to do that
globally across the project.  You can set individual flags for specific
libraries with the _LDFLAGS setting in Automake, but that's for the
developer (since it requires modifying Makefile.am), not for the person
doing the compile.

This would be something that would need to be added to Automake, at least
as I understand it.

In any event, I don't think passing -Wl,-shared into a configure script is
ever going to make sense unless I'm missing some subtlety.  That flag is
specifically for creating shared libraries, and if the package is already
building a shared library, it's (necessarily) already going to add that
flag in exactly the places where it's appropriate.

Note that it is safe to pass -fPIE globally via CFLAGS even if shared
libraries (which must use -fPIC instead of -fPIE) are in play provided
Libtool is in use, since Libtool is smart enough to not pass -fPIE into
shared library builds.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: AC_C_NORETURN macro?

2012-04-26 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:
 On 04/26/2012 09:02 AM, Vincent Lefevre wrote:

 Almost no compilers/systems are compliant to C99. For instance, under
 Linux, GCC defines unix and linux to 1, while they are not
 reserved.

 Even when in --std=c99 mode?

That, or -ansi, make GCC and glibc compliant, but almost no one uses them.
The experience of trying to use them with a large project will quickly
show why.  It's almost impossible to get the right set of feature-test
macros defined to let the code build without causing other strange
problems on other platforms, due to bugs in their strict standard
compliance mode or due to other weirdness (try, for example, requesting
_XOPEN_SOURCE 600 on Solaris, and you'll find you have to go to some
lengths to ensure that the source code and the compiler are configured for
consistently using the same standard or the system header files will throw
fatal errors and abort the compile).

I've done the experiment from time to time of supporting -ansi or
--std=c99, but even for small code bases I consider it the kind of thing
that one does as a hobby or out of curiosity.  It's not a very good way to
actually get work done and write code that is portable on a practical
level (meaning that people on multiple UNIX platforms can just run
./configure && make).

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: [FYI] {master} refactor: use modern semantics of 'open'

2012-04-24 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

 Help!  I can't release autoconf 2.69 until I figure out how to work
 around this patch.  After updating to the latest shared files, as well
 as applying this patch, I'm now stuck with output going to a literal
 file named '-' instead of going to stdout.  I suspect that the
 conversion to the 2-arg form is mishandling our idiom of '-' as standard
 in/out.

If you call open with three arguments, "-" has no special meaning and
refers to a file named "-" (since the whole point of three-argument open
is to remove all magic interpretations of the filename string).  The
easiest way to work around this is probably to change the Automake helper
functions that sit between the code and the Perl open command and have
them switch to calling open with two arguments if the file name is "-".

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: How to force location of headers

2012-04-14 Thread Russ Allbery
James K. Lowden jklow...@schemamania.org writes:
 Russ Allbery r...@stanford.edu wrote:

  "" is primarily a source of bugs and annoyances

 Based on what?

Problems just like the one on this thread, or the breakage Debian saw with
multiarch and headers that relied on "", or similar problems that we've
seen over the years with software using -I- to try to work around problems
with "".  I've also seen it cause problems with non-recursive make and
with VPATH builds with different compilers that disagreed over what the
local directory is for "" in different situations.

See also the Autoconf manual advice for the config.h header, which gives
yet another reason:

   To provide for VPATH builds, remember to pass the C compiler a
`-I.'  option (or `-I..'; whichever directory contains `config.h').
Even if you use `#include "config.h"', the preprocessor searches only
the directory of the currently read file, i.e., the source directory,
not the build directory.

   With the appropriate `-I' option, you can use `#include
"config.h"'.  Actually, it's a good habit to use it, because in the
rare case when the source directory contains another `config.h', the
build directory should be searched first.

and indeed it was that advice that pushed me towards getting rid of "" in
my projects many years ago.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: How to force location of headers

2012-04-12 Thread Russ Allbery
Kabi karste...@gmail.com writes:

 I am using AM_CFLAGS to include directories where make should look for
 headers, but only headers surrounded by <> when included is
 overridden. make is still looking for headers surrounded by "" in the
 directory with the projects source .c files.

If you can modify the source, get rid of all uses of "" in #include.  I
would recommend never using "" if you can avoid it.  Once you have a good
build system that can handle adding the appropriate -I flags for the
compiler, "" is primarily a source of bugs and annoyances and not really a
help.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



socket code on Windows (was: AC_INCLUDES_DEFAULT does not need to probe for headers that are part of C89 or POSIX.1 anymore)

2012-04-12 Thread Russ Allbery
Zack Weinberg za...@panix.com writes:

 The least-POSIXy environment this program is ever likely to be ported to
 is Windows -- advice on how to make code that assumes BSD socket headers
 compile with Winsock would be significantly more useful to me.

Gnulib probably has a bunch of other functionality to help with this, but
here are the core portability issues that I'm aware of:

* Socket functions on Windows set a different error that you have to
  retrieve from WSAGetLastError() and set with WSASetLastError() rather
  than using errno.  strerror obviously doesn't work also.

* Socket file descriptors are an opaque type, not an int, and you have to
  check against INVALID_SOCKET rather than -1 for errors.

* read/write cannot be used on sockets.  You have to use recv/send.
  Similarly, you cannot use close on sockets; you have to use closesocket.

* Applications using sockets have to do a one-time startup.

* The header files are obviously different.

Here's the core of portability for most TCP applications:

#ifdef _WIN32
# include <winsock2.h>
# include <ws2tcpip.h>
#else
# include <netinet/in.h>
# include <arpa/inet.h>
# include <netdb.h>
# include <sys/socket.h>
#endif

#ifndef HAVE_SOCKLEN_T
typedef int socklen_t;
#endif

#ifdef _WIN32
int socket_init(void);
# define socket_shutdown()  WSACleanup()
# define socket_close(fd)   closesocket(fd)
# define socket_read(fd, b, s)  recv((fd), (b), (s), 0)
# define socket_write(fd, b, s) send((fd), (b), (s), 0)
# define socket_errno   WSAGetLastError()
# define socket_set_errno(e)WSASetLastError(e)
const char *socket_strerror(int);
typedef SOCKET socket_type;
#else
# define socket_init()  1
# define socket_shutdown()  /* empty */
# define socket_close(fd)   close(fd)
# define socket_read(fd, b, s)  read((fd), (b), (s))
# define socket_write(fd, b, s) write((fd), (b), (s))
# define socket_errno   errno
# define socket_set_errno(e)errno = (e)
# define socket_strerror(e) strerror(e)
# define INVALID_SOCKET -1
typedef int socket_type;
#endif

and you then have to use socket_type and the above macros everywhere
instead of using the regular functions directly.

socket_init has to be called once and is:

int
socket_init(void)
{
    WSADATA data;

    if (WSAStartup(MAKEWORD(2,2), &data))
        return 0;
    return 1;
}

on Windows.  socket_strerror is:

const char *
socket_strerror(int err)
{
    const char *message = NULL;

    if (err >= sys_nerr) {
        char *p;
        DWORD f = FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM
            | FORMAT_MESSAGE_IGNORE_INSERTS;
        static char *buffer = NULL;

        if (buffer != NULL)
            LocalFree(buffer);
        if (FormatMessage(f, NULL, err, 0, (LPTSTR) &buffer, 0, NULL) != 0) {
            p = strchr(buffer, '\r');
            if (p != NULL)
                *p = '\0';
        }
        message = buffer;
    }
    if (message == NULL)
        message = strerror(err);
    return message;
}

on Windows (obviously not threadsafe; you have to do other things if you
need to be threadsafe).
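
For illustration, a hypothetical caller of the wrappers above (assuming
the portability header sketched earlier has been included, and with error
handling kept deliberately crude):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    socket_type fd;

    if (!socket_init()) {
        fprintf(stderr, "socket initialization failed\n");
        return EXIT_FAILURE;
    }
    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == INVALID_SOCKET) {
        fprintf(stderr, "socket: %s\n", socket_strerror(socket_errno));
        return EXIT_FAILURE;
    }
    /* ... connect, socket_read, socket_write, etc. ... */
    socket_close(fd);
    socket_shutdown();
    return EXIT_SUCCESS;
}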

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: AC_INCLUDES_DEFAULT does not need to probe for headers that are part of C89 or POSIX.1 anymore

2012-04-04 Thread Russ Allbery
Zack Weinberg za...@panix.com writes:

 which, as well as the desired check, does

 checking for ANSI C header files... yes
 checking for sys/types.h... yes
 checking for sys/stat.h... yes
 checking for stdlib.h... yes
 checking for string.h... yes
 checking for memory.h... yes
 checking for strings.h... yes
 checking for inttypes.h... yes
 checking for stdint.h... yes
 checking for unistd.h... yes

 The only tests in that list that are worth doing nowadays are for
 stdint.h and inttypes.h, and I don't think they should be done
 implicitly.

I think you're assuming a hosted target.  I'm not sure that you can make
that assumption.  Can't Autoconf be used to cross-compile software for a
free-standing target, where several of those header files are not
required to exist?

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Why I am happy to dump gzip for xz

2012-03-06 Thread Russ Allbery
Jim Meyering j...@meyering.net writes:

 If you were more intimately familiar with gzip's code, you would have
 switched long ago ;-)

[...]

Thanks for this.  I hadn't realized the issues with the gzip code.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Autoconf distributions and xz dependency

2012-03-02 Thread Russ Allbery
Warren Young war...@etr-usa.com writes:

> I went through both the .Z -> .gz and .gz -> .bz2 transitions.  I recall
> a longer overlap period where major archive sites had everything in both
> the new and previous forms.

At least in my corner of the software world, the .gz -> .bz2 transition
never happened.  I see occasional use of .bz2, but it's a definite
minority.

> I don't much care if .gz goes away now, as .Z did before it.  I'd like
> to see a .bz2 option for everything I have to manually untar for at
> least another few years.

.Z went away because of annoying software patent issues at the time, which
was the compelling case for gzip.

Personally, I fail to see a similar compelling case for xz.  It's a much
more complex but nicer and more powerful compression algorithm, sure.  But
there doesn't seem to be any horribly important reason to use it for
typical open source software distributions where the data size is quite
small, as opposed to, say, scientific data sets where the savings could be
substantial.

I'm planning on looking at it seriously for log compression, where I want
to squeeze GBs (or more) of data into less disk space, but so far for my
own projects I haven't seen any reason to move off of gzip.  My typical
free software distribution is on the order of 200KB to 2MB, at which point
any savings from xz is purely noise and the disruption of switching to
something new that not everyone has readily available seems overriding.

It's fine with me for Autoconf to do whatever the Autoconf maintainers
feel like doing; part of the fun of free software is doing things the way
that you want to do them whether or not other people agree with the logic
of your case.  :)  Or just because it's fun.  I use Debian and have xz and
an appropriate GNU tar readily available and it's all the same to me.  But
I'm fairly unconvinced by the larger argument that free software
developers should move to xz and away from gzip.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Minor autoreconf manual typo

2011-07-21 Thread Russ Allbery
Someone mentioned in another forum that I read that the autoreconf
Invocation node of the Autoconf manual has a typo: -V (upper-case) is
shown as the short option for both --version and --verbose.  From the
--help output, it looks like this should be -v for --verbose.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: wacky warning in 2.68...

2011-05-18 Thread Russ Allbery
Miles Bader mi...@gnu.org writes:

> Recently I've started getting odd warnings from autoconf, and I'm not
> sure how to shut them up... e.g., from the following configure.ac
> file:

[...]

AC_COMPILE_IFELSE(AC_LANG_SOURCE([int x;]), [opt_ok=yes], [opt_ok=no])

[...]

> I get the following warnings from running autoconf:

configure.ac:24: warning: AC_LANG_CONFTEST: no AC_LANG_SOURCE call detected in body
../../lib/autoconf/lang.m4:194: AC_LANG_CONFTEST is expanded from...
../../lib/autoconf/general.m4:2591: _AC_COMPILE_IFELSE is expanded from...
../../lib/autoconf/general.m4:2607: AC_COMPILE_IFELSE is expanded from...
configure.ac:7: BARF_CHECK_CXX_FLAG is expanded from...
configure.ac:24: the top level

> AFAICS, I _am_ using AC_LANG_SOURCE, so ... I dunno why it's
> complaining...

You're missing a level of quoting.  That should be:

AC_COMPILE_IFELSE([AC_LANG_SOURCE([int x;])], [opt_ok=yes], [opt_ok=no])

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: wacky warning in 2.68...

2011-05-18 Thread Russ Allbery
Miles Bader mi...@gnu.org writes:
> Russ Allbery r...@stanford.edu writes:

>> You're missing a level of quoting.  That should be:
>> AC_COMPILE_IFELSE([AC_LANG_SOURCE([int x;])], [opt_ok=yes], [opt_ok=no])

> Hmm, ok.

> [I'm not sure if I'm _ever_ going to really have any intuitive sense for
> quoting in autoconf]

This one was a little weird, and the only reason why I know it off the top
of my head is that it's been a frequent report.  I think there was an
explanation on the list a while back about how that error message pops out
of the internals with that problem.

But in general you're not going to go wrong by just single-quoting every
argument of every Autoconf macro always.  Once I started doing that, most
of these odd problems went away, since the legacy stuff that didn't work
properly with quoting is mostly gone now.  The only exception is that
sometimes I double-quote with [[ ]] for literal text with [] in it.  Once
in a blue moon there's some sort of bug with overquoting.  But nearly all
the weird problems people run into with Autoconf understanding its input
are from underquoting some macro argument somewhere.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: HAVE_STDBOOL_H, AC_HEADER_STDBOOL, and AM_STDBOOL_H

2011-02-01 Thread Russ Allbery
(Please cc me on responses as I'm not a member of the bug-gnulib mailing
list.)

Simon Josefsson si...@josefsson.org writes:
> Ralf Corsepius ralf.corsep...@rtems.org writes:

>> For real-world projects, gnulib often is not a viable alternative.

> Could you explain why?  There are several real-world projects that use
> gnulib, so I'm curious what the perceived reasons against it are.  I'm
> genuinely interested in the answer to the question, it is not just
> rhetoric because I happen to disagree.

Most of the code in gnulib is covered by the LGPL.  All of my projects are
released under the MIT/X Consortium/Expat license or a two-clause BSD
license.  Including LGPL code in such a project is possible, but it's
rather annoying on multiple fronts: I have to include a long and complex
license that only applies to a small handful of files in the tree but has
the potential to confuse users, it's somewhat unclear with some small
gnulib modules to what extent they're really a separate library (in
particular, it's quite difficult to meet the LGPL requirements to allow
relinking with a modified version of the gnulib files), and it causes a
lot of mental overhead and analysis impact for anyone who wants to use my
software in other ways.

I release my software under those licenses precisely because I don't want
people to have to spend a lot of time thinking about how they can and
can't use my software.  I realize this is a philosophical difference, and
I do fully respect the goals of copyleft and am *not* opposed to copyleft.
I've just chosen to take an even more permissive approach for my
particular projects.  Incorporating LGPL-covered code like gnulib makes my
life considerably more complex for only marginal gain (I already have a
portability layer of my own that, while not as comprehensive, is
sufficient for my needs).

Autoconf is released under an excellent license for my purposes.  I know
that I can use any Autoconf macros in whatever way I need in BSD-licensed
software without any additional impact.

I'm *not* asking gnulib contributors to stop contributing to gnulib or
improving it.  People certainly can release code under any license they
choose.  I'm only asking that macros already included and working in
Autoconf continue to be maintained there at least in the sense that if
people contribute improvements, those improvements wouldn't be rejected
because the macro is obsolete and gnulib should be used instead.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: HAVE_STDBOOL_H, AC_HEADER_STDBOOL, and AM_STDBOOL_H

2011-01-31 Thread Russ Allbery
Paul Eggert egg...@cs.ucla.edu writes:

> In Gnulib:

> * Rename gnulib's AC_HEADER_STDBOOL to gl_HEADER_STDBOOL.
> * Remove the AC_DEFINE([HAVE_STDBOOL_H], ...) from gl_HEADER_STDBOOL.
> * Rename gnulib's AM_STDBOOL_H to gl_STDBOOL_H.

> In Autoconf:

> * Mark AC_HEADER_STDBOOL as obsolescent, and suggest to
>   Autoconf users that in the future they use gnulib if they want to
>   be portable to pre-C99 hosts with respect to stdbool.h.

Please don't make this last change in Autoconf.  AC_HEADER_STDBOOL in
Autoconf works well right now for people who do not use gnulib, and I
don't think that it's a good idea to mark obsolescent a working Autoconf
macro to try to push people towards using gnulib instead.  The Autoconf
manual already spells out exactly how to use AC_HEADER_STDBOOL correctly,
and those instructions work fine, without requiring importing any code
from yet another project.
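
(For anyone following along, the consumer-side pattern the manual documents
for AC_HEADER_STDBOOL is roughly the following; I'm quoting from memory
rather than from the current manual, so check there for the authoritative
version.  HAVE_STDBOOL_H and HAVE__BOOL are the macros the Autoconf check
defines in config.h.)

#ifdef HAVE_STDBOOL_H
# include <stdbool.h>
#else
# ifndef HAVE__BOOL
#  ifdef __cplusplus
typedef bool _Bool;
#  else
#   define _Bool signed char
#  endif
# endif
# define bool _Bool
# define false 0
# define true 1
# define __bool_true_false_are_defined 1
#endif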

In general, please continue to support straight Autoconf users with the
macros that are in Autoconf.  It feels like there's a somewhat disturbing
long-term trend here towards pushing everyone who uses Autoconf into also
using gnulib, even if Autoconf-using projects are not particularly
interested in or set up to use gnulib.

I'm of course agnostic about the gnulib changes, since I don't use gnulib
and will happily leave discussion of that to those who do.  But given that
you're proposing renaming macros in gnulib, I don't see any need to make
the Autoconf change as well, since the gnulib macros will have different
names and will therefore not need to retain compatibility with the
same-named macros in Autoconf.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: AC_FUNC_STRTOD

2010-09-17 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

> Marking a macro obsolete in autoconf means that new code should not rely
> on it, but that the macro still exists and still does the same thing it
> used to do, so that old code that used it will continue to work.

Oh, okay, I misunderstood obsolete.  Never mind, then.  :)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: man pages error

2010-09-08 Thread Russ Allbery
Daniel U. Thibault d.u.thiba...@sympatico.ca writes:

> The man pages for autoconf state:

> #
> Otherwise, you may be able to access the Autoconf manual
> online. /usr/share/doc/autoconf/autoconf.html for more information.
> #

> There is no /usr/share/doc/autoconf/autoconf.html file.  Instead, one
> must install the autoconf-doc package (separately) and then look up
> /usr/share/doc/autoconf-doc/autoconf.html.

This is a bug in the man page in the Debian version of the package.  The
last sentence of the SEE ALSO section seems to be missing some words.  The
intent was to point to the autoconf-doc package for that file.

Could you report this against autoconf via reportbug?

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Relative paths for $INSTALL

2010-07-21 Thread Russ Allbery
We recently ran into some problems with OpenAFS where the relative path to
the install-sh script that Autoconf (config.status) substitutes into files
on systems without a good system install program (such as Solaris) caused
issues.  One of those involved calling a script created by config.status
in a loop that used cd, and the other was a place where our build system
symlinks a makefile to another makefile in a different directory.

There is usually some sort of build system reconfiguration or
refactoring that one can do to make this work with relative paths, but
it's kind of annoying.  The first instinct of other project developers is
to play various games with AC_CONFIG_AUX_DIR to try to force it to be an
absolute path, but I'm worried we're going to do something that will break
later.

In doing some web searches, I see that GCC ran into the same problem and
changes the value of $INSTALL in configure to use an absolute path, which
is generated with `cd $srcdir ; pwd`.  But there too this seems too
fragile, since detecting the case of an Autoconf-set path to install-sh
may fail later.

So, in short, it would be very nice if there were some way to force
Autoconf to use absolute paths when substituting paths to scripts in the
aux directory into generated files.  Is there any chance that Autoconf
could add an AC_PROG_INSTALL_ABS or some other way to say that the
substituted path to install-sh needs to be an absolute path?  I think
that's the main program affected; config.sub and config.guess are run
internally by Autoconf in ways that don't have this problem, and the other
helper programs are generally Automake's business and Automake handles
generating the right make rules internally.  (OpenAFS uses Autoconf but
not Automake, as it has a large and complex build system that does some
things that Automake can't easily handle, such as build kernel modules for
nearly a dozen versions of UNIX.)

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: determining 32 v. 64 bit compilatio

2010-06-29 Thread Russ Allbery
Wesley Smith wesley.h...@gmail.com writes:

> For 64bit builds, I need to #define x86_64 for a lib I'm using.  It has
> nothing to do with the size of longs from my perspective, but whether
> the binary is compiled for a 64 or 32 bit target.

I think what everyone is struggling with here is that you're asking a
question that seems straightforward to you, but which from a portability
perspective is not a valid question.  There are no simple 64-bit or
32-bit labels that you can apply to things.  Do you mean the size of a
pointer?  The size of a long?  The size of size_t?  The native word size
of the underlying hardware?  Or is that question being used as a proxy for
whether the processor supports an x86_64 instruction set?

I suspect that you're dealing with "all the world is Linux" code, and
therefore doing something like:

AC_CHECK_SIZEOF([long])
AS_IF([test $ac_cv_sizeof_long -eq 8],
[AC_DEFINE([OSBIT], 64, [Define to the size of a long in bits.])],
[AC_DEFINE([OSBIT], 32, [Define to the size of a long in bits.])])

would do what you want, but you won't find something exactly like that
built into Autoconf since it's not generally a meaningful thing to do on a
broader variety of platforms.
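
With that define in place, the C side can translate it into whatever the
library wants to see.  A purely illustrative sketch (OSBIT comes from the
configure.ac fragment above, and x86_64 is whatever macro your library
checks for):

#include <config.h>

/* Turn the configure-time word-size guess into the macro the third-party
   library expects.  This bakes in the same "all the world is Linux"
   assumption discussed above. */
#if OSBIT == 64 && !defined(x86_64)
# define x86_64 1
#endif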

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Comment on introduction pages

2010-06-03 Thread Russ Allbery
Eric Blake ebl...@redhat.com writes:

> Thanks for the report.  However, English is one of those silly languages
> where the pronoun his can have a neuter sense rather than masculine,
> and this is one of those cases.  Politically correct pundits are trying
> to eradicate that usage, but personally, I'm still of the opinion that
> his looks better than his/hers, as long as you understand that the
> usage is not locking down the gender of the antecedent.

The long-standing gender-neutral pronoun in English is singular their,
as used by such people as Jane Austen.  I would rewrite the sentence as:

The developer expresses the recipe to build their package in a
Makefile

I realize that also bothers some people who are overly well-trained in the
specific style of English forced by Latin prescriptivists during a short
portion of the history of the language, but it's grammatically correct in
English and has been for hundreds of years.

In general, please reconsider your position stated above.  Small things
like this discourage women from participating in open source projects in
little ways, and those little discouragements add up over time.  It's a
very minor thing to change to make someone feel more welcome by not
literally writing their gender out of the manual, and the reward is far
stronger than the small loss of perceived elegance of wording.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Prefix for file in source directory used in ./configure?

2010-04-24 Thread Russ Allbery
Jason Sewall sew...@cs.unc.edu writes:

> I've searched the docs and the web for info on this, but I can't seem to
> make headway.

> I have a script in the same directory as my configure.ac that tries to
> glean what commit in my VC we're building.

> GIT_VERSION=`./GIT-VERSION-GEN`

> Obviously, that './' in there doesn't work if I try to configure outside
> my source directory.  What variable do I need to add as prefix to that
> path to let builds outside the source tree find that script?

If that reference is in a Makefile.am or something similar, you want
either $(abs_top_srcdir) or $(top_srcdir).  I tend to use the abs_*
versions as a matter of course since they avoid weird issues.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/




Re: AC_CHECK_FUNCS and gcc with -Werror

2010-03-04 Thread Russ Allbery
Steffen Dettmer steffen.dett...@googlemail.com writes:

> Do you have some suggestions what tools could help to do such
> nightly builds?

I'm afraid the scale of development I usually do is a bit smaller, and a
cron job that does a git pull, builds the software, runs make warnings and
make check, and mails out the log file if anything fails is sufficient for
what I've done.  I just wrote a simple one myself.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/




Re: AC_CHECK_FUNCS and gcc with -Werror

2010-03-03 Thread Russ Allbery
Peter Breitenlohner p...@mppmu.mpg.de writes:
> On Wed, 3 Mar 2010, Steffen Dettmer wrote:

>> ohh sorry, I expressed myself wrongly.  As I already wrote Eric,
>> for us it does not matter how -Werror is switched internally,
>> only that it is set within Makefile.

> (1) one way to do that is to append -Werror to CFLAGS after doing all
> sorts of tests in configure.ac.  The tests always use the current value
> of CFLAGS.

> (2) as noted by others, appending a mandatory(?) flag to CFLAGS is a bad
> idea (and contradicts the GNU coding standards).  CFLAGS is one of the
> variables passed from configure to Makefile that can also be specified
> on the Make command line, i.e., `is reserved for the user'.

What I do for my projects is add a separate warnings target to my Automake
Makefile.am:

# A set of flags for warnings.  Add -O because gcc won't find some warnings
# without optimization turned on, and add -DDEBUG=1 so we'll also compile all
# debugging code and test it.
WARNINGS = -g -O -DDEBUG=1 -Wall -W -Wendif-labels -Wpointer-arith \
-Wbad-function-cast -Wwrite-strings -Wstrict-prototypes \
-Wmissing-prototypes -Wnested-externs -Werror

warnings:
	$(MAKE) V=0 CFLAGS='$(WARNINGS)'
	$(MAKE) V=0 CFLAGS='$(WARNINGS)' $(check_PROGRAMS)

The coding style standard then requires that all code compile with make
warnings before being committed, but that way the distributed code to the
end user doesn't enable aggressive warnings and -Werror, since normally
new warnings on the end-user system are better skipped than used to prompt
a build failure.

Overriding CFLAGS as this target does is not good practice for any target
that would be used by the end user, since the user may have set CFLAGS to
something else, but it's fine for targets like this that are generally
only run by developers.

This approach also lets me use gcc-specific warning flags since I know the
developers will be using gcc, as opposed to end users who may be using a
wide variety of other compilers.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



