Re: automake-1.16.92 released

2024-06-30 Thread Dave Hart
On Sun, 30 Jun 2024 at 05:03, Nick Bowler  wrote:

> [...]
>
> autoreconf decides to run libtoolize based on an m4 trace right after
> it runs aclocal, looking for expansion of LT_INIT (or the older
> AC_PROG_LIBTOOL).
>
> This means that for this to work at all:
>
>  (1) a macro definition of LT_INIT must be available at this time, and
>  (2) the LT_INIT macro must actually be expanded directly, and
>  (3) tracing must not be disabled.
>
> Normally aclocal will take care of (1).  Since this tool is part of
> Automake, this is something you have changed in your setup so it's
> plausible this is the underlying cause of your problem.
>
> aclocal works basically by grepping your configure.ac and all the files
> it knows about looking for things that look like macro definitions and
> things that look like macro expansions, and copying in any missing
> definitions.  So for this to work:
>
>   (1.1) aclocal must know where to find the definition of LT_INIT.
>   (1.2) aclocal must see the place where LT_INIT is expanded.
>
> Normally, aclocal and libtool are installed to the same prefix, libtool
> will install its macros into the default aclocal search path, and
> aclocal will find the macro definitions.  If they are installed into
> different prefixes, aclocal will need help, you can use the dirlist
> mechanism (recommended for a permanent installation) or for a quick fix,
> set the ACLOCAL_PATH environment variable to the installed location of
> the libtool macros.
>

Thanks for explaining this, Nick, and in fact the problem was
self-inflicted as I was installing the prerelease Automake versions to a
different prefix than libtool (and the base automake) without taking steps
to help aclocal find the libtool macro definitions.
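As a concrete sketch of Nick's two mechanisms (the directories below are throwaway stand-ins created only for illustration; a real install would use libtool's and the prerelease Automake's share/aclocal directories):

```shell
# Stand-ins for real install locations (assumptions for this sketch):
libtool_m4_dir=$(mktemp -d)   # where libtool installed its .m4 files
automake_acdir=$(mktemp -d)   # the prerelease aclocal's system directory

# Permanent fix: a "dirlist" file in aclocal's system directory listing
# extra macro search directories, one per line.
echo "$libtool_m4_dir" > "$automake_acdir/dirlist"

# Quick fix instead: export ACLOCAL_PATH for a single bootstrap run,
# then run autoreconf as usual (e.g. autoreconf -v -i).
export ACLOCAL_PATH=$libtool_m4_dir
echo "aclocal would also search: $ACLOCAL_PATH"
```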

After installing Libtool to the same prefix as the prerelease Automake, the
problem disappeared.

Cheers,
Dave Hart


Re: automake-1.16.92 released

2024-06-29 Thread Dave Hart
On Sat, 29 Jun 2024 at 21:29, Karl Berry  wrote:

> Indeed. Thank you very much for the report (and the followup). The first
> question that comes to mind: are you using the same version of libtool
> in the various cases? --thanks again, karl.
>

Yes, on both systems it's libtool 2.4.6

I saw no interesting differences between various Automake versions after
1.16.5, nor with Autoconf 2.71 vs. 2.72.  Let me know if there's more I can
do to home in on the problem.

-- 
Cheers,
Dave Hart


Fwd: Automake 1.16.90 regression mistakenly "not using Libtool"

2024-06-29 Thread Dave Hart
It seems debbugs.gnu.org is down or running behind, so here's what I've
found.  Further testing after I composed the report shows the Automake
1.16i prerelease also suffers the problem.

---------- Forwarded message ---------
From: Dave Hart 
Date: Sat, 29 Jun 2024 at 17:18
Subject: Automake 1.16.90 regression mistakenly "not using Libtool"
To: 


I'm seeing a problem building ntpd from a Bitkeeper repo that doesn't occur
with a make dist tarball when using Autoconf 2.71 and Automake 1.16.90 or
1.16.92; it reliably does not occur with Automake 1.16.5.  To enable easy
reproduction, I've made a tarball of the source from a checkout of NTP
4.2.8p18 available at:

https://davehart.net/ntp/test/ntp-4.2.8p18-vcs.tar.xz

Most of my testing was on FreeBSD 12.1 with stock m4, perl 5.32.1, and
Autoconf 2.71, using Automake 1.16.5, 1.16.90, and 1.16.92 from tarball
source:

FreeBSD hart.chi1.ntfo.org 12.1-RELEASE_SI FreeBSD 12.1-RELEASE_SI TEMPLATE amd64

I've reproduced the failure on Ubuntu 22.04 with gm4 1.4.18, perl 5.34.0, and
both stock Autoconf 2.71 and 2.72 from tarball source, and the success with
distro Automake 1.16.5:

Linux dlh-22-04 6.5.0-1022-azure #23~22.04.1-Ubuntu SMP Thu May  9 17:59:24
UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

While the FreeBSD system is building on NFS with 1s mtime resolution, the
Ubuntu system is building on local ext4 with subsecond mtimes.

The error occurs during `autoreconf -v -i`, like so:

# tar xf ntp-4.2.8p18-vcs.tar.xz
# cd ntp-*-vcs
# ./bootstrap 2>&1 | tee bootstrap.log

I'm attaching logs showing the behavior on Ubuntu with ac 2.72 and am
1.16.5 vs 1.16.92.  The differences up to the failure are below.

The bootstrap script is modified slightly compared to the one in
ntp-4.2.8p18.tar.gz and our bk repos to sort the filenames touched early in
the script to minimize unhelpful diff clutter.


--- bootstrap-am-1.16.5.log 2024-06-29 16:54:55.290591021 +
+++ bootstrap-am-1.16.92.log 2024-06-29 17:00:56.366844439 +
@@ -19,162 +19,33 @@
 autoreconf: running: aclocal -I m4
 autoreconf: configure.ac: tracing
 autoreconf: configure.ac: creating directory build-aux
-autoreconf: running: libtoolize --copy
-libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'.
-libtoolize: copying file 'build-aux/ltmain.sh'
-libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
-libtoolize: copying file 'm4/libtool.m4'
-libtoolize: copying file 'm4/ltoptions.m4'
-libtoolize: copying file 'm4/ltsugar.m4'
-libtoolize: copying file 'm4/ltversion.m4'
-libtoolize: copying file 'm4/lt~obsolete.m4'
+autoreconf: configure.ac: not using Libtool
 autoreconf: configure.ac: not using Intltool
 autoreconf: configure.ac: not using Gtkdoc
-autoreconf: running: aclocal -I m4
-autoreconf: running: /usr/local/bin/autoconf
-configure.ac:46: warning: The macro 'AC_PROG_GCC_TRADITIONAL' is obsolete.
-configure.ac:46: You should run autoupdate.
-../lib/autoconf/c.m4:1676: AC_PROG_GCC_TRADITIONAL is expanded from...
-configure.ac:46: the top level
-configure.ac:357: warning: The macro 'AC_HEADER_TIME' is obsolete.
+autoreconf: running: /usr/bin/autoconf
+configure.ac:357: warning: The macro `AC_HEADER_TIME' is obsolete.
 configure.ac:357: You should run autoupdate.
-../lib/autoconf/headers.m4:702: AC_HEADER_TIME is expanded from...
+./lib/autoconf/headers.m4:743: AC_HEADER_TIME is expanded from...
 configure.ac:357: the top level
-configure.ac:805: warning: The macro 'AC_TRY_LINK' is obsolete.
+configure.ac:805: warning: The macro `AC_TRY_LINK' is obsolete.
 configure.ac:805: You should run autoupdate.
-../lib/autoconf/general.m4:2918: AC_TRY_LINK is expanded from...
+./lib/autoconf/general.m4:2920: AC_TRY_LINK is expanded from...
 m4/acx_pthread.m4:86: ACX_PTHREAD is expanded from...
 configure.ac:805: the top level
 configure.ac:1012: warning: AC_OUTPUT should be used without arguments.
 configure.ac:1012: You should run autoupdate.
-autoreconf: running: /usr/local/bin/autoheader
+autoreconf: running: /usr/bin/autoheader
 autoreconf: running: automake --add-missing --copy --no-force
 configure.ac:24: installing 'build-aux/compile'
 configure.ac:26: installing 'build-aux/config.guess'
 configure.ac:26: installing 'build-aux/config.sub'
 configure.ac:14: installing 'build-aux/install-sh'
 configure.ac:14: installing 'build-aux/missing'
+Makefile.am:161: error: Libtool library used but 'LIBTOOL' is undefined
+Makefile.am:161:   The usual way to define 'LIBTOOL' is to add 'LT_INIT'
+Makefile.am:161:   to 'configure.ac' and run 'aclocal' and 'autoconf' again.
+Makefile.am:161:   If 'LT_INIT' is in 'configure.ac', make sure
+Makefile.am:161:   its definition is in aclocal's search path.
 Makefile.am

Re: automake-1.16.92 released

2024-06-29 Thread Dave Hart
I'm seeing a regression building ntpd on FreeBSD 12.1 amd64 with Autoconf
2.71 between Automake 1.16.5 and 1.16.92.  I haven't filed a bug report yet
as I'm trying to do my part to characterize it well and provide an easy
reproduction.  It may well be a bug in our use of Automake, in which case I
apologize in advance, but I wanted to give a heads-up in case it affects a
decision to release 1.17 before I get a good report together.

The divergence in behavior starts with:

autoreconf: configure.ac: not using Libtool

on 1.16.92 where 1.16.5 invokes libtoolize and aclocal, and later comes
crashing down with:

configure.ac:14: installing 'build-aux/missing'
Makefile.am:161: error: Libtool library used but 'LIBTOOL' is undefined
Makefile.am:161:   The usual way to define 'LIBTOOL' is to add 'LT_INIT'
Makefile.am:161:   to 'configure.ac' and run 'aclocal' and 'autoconf' again.
Makefile.am:161:   If 'LT_INIT' is in 'configure.ac', make sure
Makefile.am:161:   its definition is in aclocal's search path.
Makefile.am: installing 'build-aux/depcomp'
parallel-tests: installing 'build-aux/test-driver'
autoreconf: error: automake failed with exit status: 1

More later,
Dave Hart

On Fri, 21 Jun 2024 at 05:20, Jim Meyering  wrote:

> [Thanks to Karl Berry for doing so much of the work again, preparing
>  for this release and even writing most of the following. ]
>
> We are pleased to announce the GNU Automake 1.16.92 test release.
>
> This is a release candidate for the upcoming automake-1.17.
> It mostly attempts to eliminate a delay in configure runs in 1.16.90.
> Please test if you can.
>
> We're particularly interested in bugs or regressions in the actual
> Automake functionality.  Some tests are already known to fail on some
> non-GNU/Linux systems with some configurations, and have open bugs.
> Barring patches, we won't be able to fix all such test failures for this
> release (or, likely, ever).  Nonetheless, we do welcome all bug reports
> (and patches!), in the test suite or otherwise.  For possible
> convenience, here is the open bug list:
>   https://debbugs.gnu.org/cgi/pkgreport.cgi?package=automake
>
> See below for the detailed list of changes since the
> previous version, as summarized by the NEWS file.
>
> Download here:
>
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.gz
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.xz
>
> Please report bugs and problems to 
> (instead of replying to this mail),
> and send general comments and feedback to ,
> and patches to .
>
> Thanks to everyone who has reported problems, contributed
> patches, and helped test Automake!
>
> -*-*-*-
>
> For planned incompatibilities in a possible future Automake 2.0 release,
> please see NEWS-2.0 and start following the advice there now.
>
> ~~~
> There have been 25 commits by 7 people in the 18 days since 1.16.90.
>
> Thanks to everyone who has contributed!
> The following people contributed changes to this release:
>
>   Bruno Haible (2)
>   Collin Funk (2)
>   Jim Meyering (1)
>   Karl Berry (15)
>   Mike Frysinger (1)
>   Paul Eggert (3)
>   Yves Orton (1)
>
> ==
>
> Here is the GNU automake home page:
> https://gnu.org/s/automake/
>
> For a summary of changes and contributors, see:
>   https://git.sv.gnu.org/gitweb/?p=automake.git;a=shortlog;h=v1.16.92
> or run this command from a git-cloned automake directory:
>   git shortlog v1.16.90..v1.16.92
>
> Here are the compressed sources:
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.gz   (2.4MB)
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.xz   (1.6MB)
>
> Here are the GPG detached signatures:
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.gz.sig
>   https://alpha.gnu.org/gnu/automake/automake-1.16.92.tar.xz.sig
>
> Use a mirror for higher download bandwidth:
>   https://www.gnu.org/order/ftp.html
>
> Here are the SHA1 and SHA256 checksums:
>
>   bce896594482a4f3e6f627b3fa977ad29cd0c610  automake-1.16.92.tar.gz
>   fENTlskkri7F3iol3Tput2hKD8NUBMbKQLPsWQjShhU=  automake-1.16.92.tar.gz
>   7923d799567f8c44d2ec016b3af69c83c5a07343  automake-1.16.92.tar.xz
>   n2R8mLMlFiAFWT89aOsMVTc+2toXiGkm9P+4Ko0M52I=  automake-1.16.92.tar.xz
>
> Verify the base64 SHA256 checksum with cksum -a sha256 --check
> from coreutils-9.2 or OpenBSD's cksum since 2007.
>
> Use a .sig file to verify that the corresponding file (without the
> .sig suffix) is intact.  First, be sure to download both the .sig file

Re: Detect --disable-dependency-tracking in Makefile.am

2023-09-30 Thread Dave Hart
On Sun, 1 Oct 2023 at 04:45, Jan Engelhardt  wrote:

> >I didn't explain sufficiently.  The submakes I'm talking about are my
> >doing, and I want to conditionalize them on whether
> >--enable-dependency-tracking is used.
> >
> >In for example both ntpq/Makefile.am and ntpd/Makefile.am I'm invoking:
> >
> >(cd ../libntp && make libntp.a)
>
> Yes and if you didn't do sub-makes, then the prerequisites for libntp.a
> would be known (and evaluatable to determine whether to rebuild).
>

Yes.  The only downside that comes to mind is it would no longer be easy to
build a particular subdir despite some other subdir that builds earlier
having an issue you don't care to tackle at the moment.

I welcome anyone who wants to give it a try.  Maintainability is more
important to us than performance, and there may be gremlins in our build
system that make it tricky, I haven't looked into it.  I have my hands full
with more pressing issues than overhauling the build system.
-- 
Cheers,
Dave Hart


Re: Detect --disable-dependency-tracking in Makefile.am

2023-09-30 Thread Dave Hart
On Sun, 1 Oct 2023 at 00:59, Nick Bowler  wrote:

> Suggestion 2) All explicit --enable-foo/--disable-foo arguments to
> a configure script are available in shell variables; in the case of
> --disable-dependency-tracking you can do something like this in
> configure.ac:
>
>   AM_CONDITIONAL([NO_DEPS], [test x"$enable_dependency_tracking" = x"no"])
>
> then in Makefile.am:
>
>   if NO_DEPS
>   # stuff to do when dependency tracking is disabled
>   else
>   # stuff to do otherwise
>   endif
>
> Note that these approaches are different in the case where dependency
> tracking is disabled because it is not supported by the user's tools,
> rather than by explicit request.  This may or may not matter for your
> use case.


Thanks, Nick, you've made it easy for me without spelunking and trying to
guess what's documented.  I think I'll go with the documented approach as I
am not concerned about the corner case.  I don't know what tool deficiency
might trigger it, but I'm going to assume for now that it's good enough to
optimize build time only when --disable-dependency-tracking is used.
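Applied to the submake case elsewhere in this thread, the documented approach might look like the following sketch (the NO_DEPS name comes from Nick's suggestion; the build-libntp target name is made up for illustration):

```makefile
# configure.ac (sketch):
AM_CONDITIONAL([NO_DEPS], [test x"$enable_dependency_tracking" = x"no"])

# ntpd/Makefile.am (sketch):
if NO_DEPS
# One-shot build: SUBDIRS ordering already builds libntp.a first,
# so skip the extra submake.
build-libntp:
else
# Developer build: refresh libntp.a even when make runs in this subdir.
build-libntp:
	cd ../libntp && $(MAKE) libntp.a
endif
```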

Cheers,
Dave Hart


Re: Detect --disable-dependency-tracking in Makefile.am

2023-09-30 Thread Dave Hart
On Sat, 30 Sept 2023 at 05:07, Jan Engelhardt  wrote:

> On Saturday 2023-09-30 05:27, Dave Hart wrote:
>
> >I've added code to the ntp.org Makefile.am files to ensure the static
> >utility library libntp.a is up-to-date for each program that uses it, to
> >ensure the build is correct.  When building the project, this adds a bunch
> >of extra submake invocations which slows down the build.  I'd like to omit
> >those when --disable-dependency-tracking is used, to speed one-off builds
> >that are done by end-users and packagers.
>
> submake is pretty much independent of and from dependency tracking.
>
> The general direction contemporary projects take is to just not do submake
> anymore, because it's a slowdown whether or not deptracking is used.
>

I didn't explain sufficiently.  The submakes I'm talking about are my
doing, and I want to conditionalize them on whether
--enable-dependency-tracking is used.

In for example both ntpq/Makefile.am and ntpd/Makefile.am I'm invoking:

(cd ../libntp && make libntp.a)

This is for my convenience as a developer so that any changes I make to
libntp sources trigger its rebuild even if I'm invoking make in just one
subdirectory (say ntpd) rather than the root of the Automake project.

This slows down a one-shot build from source quite a bit, since there are a
half-dozen client directories and there are a lot of rules in
libntp/Makefile.am.  So I want to be sure that when
--disable-dependency-tracking has been passed to configure, as is often
done by packagers, we omit those submakes, since the ordering of
Makefile.am SUBDIRS ensures libntp.a is built before its consumers.

Thanks for your response,
Dave Hart


Detect --disable-dependency-tracking in Makefile.am

2023-09-29 Thread Dave Hart
I've added code to the ntp.org Makefile.am files to ensure the static
utility library libntp.a is up-to-date for each program that uses it, to
ensure the build is correct.  When building the project, this adds a bunch
of extra submake invocations which slows down the build.  I'd like to omit
those when --disable-dependency-tracking is used, to speed one-off builds
that are done by end-users and packagers.

I'm guessing someone has trod this ground before.  I'd appreciate pointers
to examples of how others have detected --disable-dependency-tracking to
change their build behavior.

Thanks in advance.
Dave Hart


Re: Automake 1.11.2 released

2011-12-26 Thread Dave Hart
On Tue, Dec 27, 2011 at 02:18, Bob Friesenhahn wrote:
> From a layman's standpoint, I noticed right away that lzip is small and in
> the spirit of gzip and bzip2 whereas xz is huge (10X more source code last
> time I checked) and has many more lines of configure script than lzip has of
> source code in its entirety.

I've never used lzip or xz.  Hell, so far I've only created a handful
of .bz2 files myself (one-off convenience stuff).  I do not take any
position on inclusion of xz or lzip support in Automake's make dist.

I do question the usefulness of comparing line counts of an
Autoconf-generated configure script and source code.  If you're trying
to elicit support from maintainers of oft-maligned yet widely-used
autotools, you've chosen an interesting way of warming up your
audience.

> However, lzip is written in portable C++ (rather than C) and does not
> configure via autotools (because it uses only portable C++) so it is at a
> political disadvantage.

I'm not sure how using only portable C++ eliminates the utility of
autotools.  I can see that using C++ means never having to care about some
esoteric ancient history relevant only to older and/or less mainstream
systems and compilers, but it seems to me much the same can be said of
requiring C99?

I've never used SCons either, though I've been paying attention to one
project that recently switched from Autotools and is generating a
steady stream of "doesn't build on X brand widely-deployed system"
months later.  My impression is the simplifications SCons brought have
come at a substantial portability price.  Though I'm sure that with enough
effort and time the squeakiest platforms with issues will get some
grease, I suspect the project in question will not ever be as portable
with SCons as it was with the hated Autotools.

I appreciate that many Autoconf + Automake users can't be bothered to
understand how Autotools works or what m4 is, and yet manage to
copy/paste and blindly experiment enough to scrape along.  I hope more
of them switch to SCons and other Autotools alternatives, which may
well be better for their project, but in any case would mean fewer
whinging Autotools-dependent projects blaming tools they can't be
bothered to learn to use for their own shortcomings.  m4 is so
inside-out of the way most programmers think, and Autotools for
simpler cases hide that entirely behind a blissfully simplified facade
of m4 macro invocations that many can use without understanding, until
things get slightly less boilerplate.  Then out come the
pitchfork-wielding masses blaming their unknowable black boxes rather
than take the time to hump the Autoconf and Automake learning curve.

Grinchfully,
Dave Hart



Re: PCH support

2011-12-23 Thread Dave Hart
On Fri, Dec 23, 2011 at 20:51, Warren Young  wrote:
> The only important thing to know is that it's a way to make the compiler
> dump its parse tree to disk during compilation so that it can simply reload
> that state from disk instead of rebuilding it from scratch for each module
> it builds.
>
> You might think of PCH as a similar optimization to that of a bytecode
> compiler for a dynamic language: it doesn't get you native code, like you
> can get with a traditional static language, but you still get a speed
> benefit by avoiding reparsing.
>
> PCH is most valuable with headers like STL which are commonly used across
> the program and are expensive to parse and reparse and re-reparse.

True, but most C/C++ source files #include orders of magnitude more lines
than they contain themselves, so assuming the source code is rearranged to
have a "precomp.h" containing the bulk of the #includes, the compile will
be notably faster.

> I think the idea is that if autoconf detects that PCH is available and
> automake generates the correct compiler commands to use it, it will be there
> "for free" to any user of the autotools.  Builds just get magically faster.

Given the source changes needed to leverage PCH, I suspect it'll take
a bit of maintainer involvement to enable useful PCH support in each
package.

> There's a monkey wrench, in that PCH doesn't work well if you don't organize
> your header files to take advantage of it.  Say you have a program with 20
> modules, and none of them have any commonality in their #include lines.  PCH
> might make such a build *slower*.  PCH gets its biggest benefit when you can
> make the includes as similar as possible across modules, at least up to a
> point.
>
> Visual C++ avoids this trap by generating a header file for the project
> which you're supposed to #include in every module, and in which goes
> #includes for the most commonly used things.  (stdio.h, windows.h...) The
> project is configured to only generate PCH output for that one header, so
> there is none of the cache thrashing that happens in my 20-modules example.
>
> I'm sure you care nothing for Visual C++, but most of the people begging for
> PCH support are probably coming from this world.

Another monkey wrench is that gcc and Visual C++ have different models for
how PCH is implemented.  Support in Automake would ideally target both
by finding a compatible subset.  I'm sure there are existing
open-source models that demonstrate how to use both gcc and VC
precompiled headers.  As I recall, gcc support is a bit more generic
but involves a separate PCH invocation to "compile" the headers, while
VC++ requires precomp.h be the first item included in each
participating file but doesn't require a separate compiler invocation
-- the first one that can use the precomp.pch generates it.
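A sketch of the two invocation models, from memory (flag spellings and the precomp.h/precomp.pch names should be checked against the gcc and Visual C++ documentation):

```shell
# GCC: a separate invocation "compiles" the header to precomp.h.gch;
# gcc then uses it automatically wherever precomp.h is #included.
gcc -x c-header -c precomp.h -o precomp.h.gch
gcc -c first.c second.c

# Visual C++: the first TU creates the .pch (/Yc), the others consume
# it (/Yu); precomp.h must be the first #include in each of them.
cl /c /Yc"precomp.h" /Fp"precomp.pch" first.c
cl /c /Yu"precomp.h" /Fp"precomp.pch" second.c
```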

The compile-time savings can be relatively huge.  Support in Automake
would be lovely and I'd be happy to help test any patches.

Cheers,
Dave Hart



Re: minor error and a question

2011-12-05 Thread Dave Hart
On Mon, Dec 5, 2011 at 17:33, Joakim Tjernlund wrote:
> One thing that would go a long way is if one could do something like this:
>  make install eq_ss
> and have make just install what is in the eq_ss directory

As long as subdirectories of eq_ss are intended to be installed as well:

cd eq_ss
make install

Cheers,
Dave Hart



Re: Still help needed at creating Makefile.am

2011-11-26 Thread Dave Hart
On Sat, Nov 26, 2011 at 19:05, Stefan  wrote:
> What fails is, that on make distcheck it looks like the scripts are not 
> produced. If I do a make before, everything works.
>
> noinst_SCRIPTS=
> if !CRB_NO_RSCRIPTS
>  noinst_SCRIPTS += sample_distris center 1A center_1A
> endif
>
> CLEANFILES = $(noinst_SCRIPTS) $(noinst_PLOTS)
>
> EXTRA_DIST = $(noinst_PLOTS) $(noinst_DATA) $(noinst_R)

Perhaps try adding $(noinst_SCRIPTS) to EXTRA_DIST?  Just a stab in
the dark without looking at the manual...

Cheers,
Dave Hart



Re: [gnu-prog-discuss] Could automake-generated Makefiles required GNU make?

2011-11-23 Thread Dave Hart
On Wed, Nov 23, 2011 at 14:39, Warren Young  wrote:
> Yes, and we've bought that last 0.001% of compatibility with bigger, slower,
> and harder to read generated Makefiles and configure scripts. TANSTAAFL.  If
> the price to lose some bloat, gain some speed, and increase the clarity of
> these files is that I have to install GNU make on the 0.001% of systems
> where it isn't installed already, that seems a fair trade.

It's unclear whether Autoconf + Automire will be able to detect and
use GNU make which is installed but is not named 'make'.  As I said,
the basic tarball user's build instructions are "configure && make".
I realize there can be automation to use, for example, gmake via
Makefile vs. Makefile.gnu, and if that sort of change is in place
before the first requirement for GNU make, the pain is reduced.  In
the realm of people building NTP from source, far more than 1 in
100,000 seem to be using systems where 'make' is available but is not
GNU make.  Even if we define the requirement as $(MAKE-make) is GNU
make or gmake is GNU make, more than 1 in 100,000 that I've dealt with
using NTP tarballs are using systems where GNU is not built-in, and
would be negatively impacted by the additional prerequisite required
to build NTP.

However, as long as this experimentation with requiring GNU make is
done in an Automire fork and not Automake, I have no qualms greater
than concern for maintainer attention to Automake fading over time in
favor of Automire.

Cheers,
Dave Hart



Re: [gnu-prog-discuss] Could automake-generated Makefiles required GNU make?

2011-11-23 Thread Dave Hart
On Tue, Nov 22, 2011 at 21:45, Harlan Stenn  wrote:
> NTP pretty much runs everywhere.  It was  not that long ago that NTP
> dropped support for K&R C compilers, and at that point required ANSI C
> (and I'm not sure if it's C89/C90 or C99, but nobody has complained so I
> haven't looked harder).

NTP requires only ANSI C, also known as ISO C, C89, or C90.  It does
not require any support of later standards; though we do, for example,
rely on C99 snprintf semantics, we provide a replacement version for
use with C runtimes offering only ANSI C snprintf.

>  Now that commodity x86 boxes are so "pervasive"
> there is no longer a compelling reason to support things like ancient
> sparc, mips, hp, ... boxes.

I agree the reason becomes less compelling as more capable systems
become more commonplace, but I do not agree ancient RISC boxes are no
longer an interesting target for current NTP builds.  AGC is deploying
a new PPS-synced ntpd server on his network using an ancient SPARC box
running current NetBSD.  I would be disappointed if I were telling him
he's limited to using older NTP releases because we no longer support
NTP on insufficiently common and modern hardware.  An NTP consumer at
Software AG regularly files bugs against NTP for build compatibility
breaks with ancient Sun hardware and software.  I value portability as
a concept extremely highly, and in practice I want NTP tarballs to be
as portable as we can make them, with a very short list of
prerequisites.  Our prerequisites for tarball users right now are an
ANSI C compiler, Automake-supported make (which means nearly any make
in the wild), and Autoconf-compatible shell.  Bumping those
requirements to include GNU make would reduce the package's
portability and decrease the percentage of users who try building the
tarball which eventually succeed.

> My goal is to make sure that people can easily build NTP.
>
> Toward that end I want to minimize the number of extra tools that might
> need to be installed.
>

Agreed, and easily means without additional prerequisites to me.

> I would not want to require GCC, for example.
>
> We don't require perl, but if it is there we use it.

Not from a tarball with unmodified source.  Yes, if ntpdc headers are
changed we rely on perl, and to use flock-build we rely on perl, but
neither comes into play for an end-user simply building NTP from a
tarball.

> We do not require yacc-lex/bison-flex or GNU autogen for building.  But
> if a developer wants to changes certain files, those tools will be
> needed.
>
> If there is a compelling reason to "upgrade" from current automake we'll
> do it.
>
> Some things I'd like to see would include easy non-recursive Makefiles
> (that would let folks easily build any list of given particular
> programs), and a means to integrate NTP into a larger build environment.

In other words, Harlan has indicated to me that a non-recursive Makefile
sounds good for eliminating build-system bugs caused by each directory's
Makefile having a limited view of the whole, and I agree; but he
doesn't like the idea of giving up the ability to run "make" in, say, the
ntpd subdirectory and have only ntpd and its prerequisite directories'
components build, and I agree again.  If anyone knows of examples of
non-recursive Makefile implementations that manage to preserve the
recursive-make property of being able to run make in a subdir to build a
subset, please share so we can learn from their pioneering.
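One pattern that can preserve this property: keep all the real rules in a single top-level Makefile and drop a tiny hand-written stub in each subdirectory that forwards to it (a sketch only; the file and target names here are assumptions, not from our tree):

```makefile
# ntpd/Makefile (hand-written stub, sketch): forward to the top-level
# non-recursive Makefile, asking only for this directory's products.
all:
	cd .. && $(MAKE) ntpd/ntpd
clean:
	cd .. && $(MAKE) clean-ntpd
```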

Cheers,
Dave Hart



Re: [gnu-prog-discuss] Could automake-generated Makefiles required GNU make?

2011-11-22 Thread Dave Hart
On Wed, Nov 23, 2011 at 02:24, Warren Young  wrote:
> Google just found this for me in the NetBSD docs: "Packages which use GNU
> Automake will almost certainly require GNU Make."  I'm guessing that was
> written by a NetBSD fan from experience, rather than slipped in by some
> pro-GNU-anti-BSD saboteur.  If so, fait accompli already.

Not so fast.  The advice is not bad but it's howto-style advice based
on the fact that many packages built using Automake are only tested by
their maintainers and vocal users with 'make' being GNU make.  It does
not state and shouldn't be read to imply that GNU Automake requires
GNU make of the tarball user.  When testing NTP build compatibility, I
use the 'make' the system provides, even if gmake is available,
because I want to know of portability problems to other makes and
because the instructions are "configure && make" not "configure &&
(gmake || make)".  Unfortunately, I know others say "I prefer GNU
make" and translate that into "I test NTP build compatibility only
after ensuring make/$MAKE points to GNU make on every one".  The
latter practice hides portability problems that will arise for users
unaware they are expected to similarly prefer GNU make and prearrange
to ensure make or $MAKE leads to it.

> Besides, why should BSD purity get to hold back the Autotools?  If the
> distrowatch.com stats are to be believed, *BSD's market share is under 1%
> that of Linux, which itself is only about 1% of the overall market of
> machines the Autotools can reasonably be used on.  Further reduce that by
> the percentage of BSD boxes that have not yet had gmake installed after
> installation; 10% maybe?  We're probably talking about a set of boxes
> comprising < 0.001% of the market.  (10% of 1% of 1%.)

If Autotools are primarily intended to support those using GNU/Linux
systems and portability is not a goal, your argument that GNU has won
and BSD compatibility of free software is no longer worthwhile makes
sense.  As long as build system portability is a goal, the numbers
don't really matter until no one using Autotools has any customers
they want to support using non-GNU/Linux systems.

As Ralf suggests, Automake has made it this far without requiring more
than portable make.  Changing to producing tarballs which require the
end user provide GNU make before they will build means increased
end-user pain on non-GNU/Linux systems to reduce Automake maintainer
pain and pain of developers creating packages with Automake.  There
are tradeoffs and it brings up the myriad ways Automake is used in the
wild, not always in purely free software scenarios and not only by
developers.  Automake-produced makefiles are used by end users
building from source with no interest in developing software.

Cheers,
Dave Hart



Re: Could automake-generated Makefiles required GNU make? (was: Re: [gnu-prog-discuss] portability)

2011-11-22 Thread Dave Hart
n their way to being software developers simply because
they are using a GNU-like system.  Similarly it is not true that
nearly all such 'newbies' in the last few years are using Linux or Mac
OS.  In the last few years, people who have reason to build ntpd from
source whom I've encountered have been as likely to be using a
proprietary or BSD OS as Linux or Mac OS.

In my world, it is not only old hands stuck in their ways and
well-versed in BSD make vs. GNU make who are users of BSD and
proprietary OSes.  Sometimes someone else chose the OS they use
(university administrator, company IT guy).  Sometimes they have
license-related influence on that choice, where to be ethical they are
forced to avoid building on GPL technologies and seek BSD-licensed
alternatives.  OS choices aren't always a matter of popularity or
preference.

Automake is GNU software that, like several other pieces of GNU build
infrastructure, has enhanced portability of a lot of software.  As
pointed out previously, before decreasing portability of packages
relying on Automake, we should consider the perspective of end users
of all sorts of Automake-using packages, weighing the benefits to them
against the cost of requiring GNU make before "configure && make" (or
gmake...)

I like RMS's idea of switching one or two GNU packages to require GNU
make to test the waters.  Obviously, GNU make itself needs to require
only portable make to enable bootstrapping.  I agree the experiment
shouldn't be done first with Automake, as that would have implications
for all packages using Automake.  Doing it with an Automake fork called
Automire wouldn't be such a problem, but Stefano probably doesn't find
forking appealing for a number of sound reasons.

Thanks for the choice to support use of Automake by non-GPL packages,
and for hearing the concerns of a maintainer of such a package.

Cheers,
Dave Hart



Re: [gnu-prog-discuss] Could automake-generated Makefiles required GNU make?

2011-11-22 Thread Dave Hart
At the risk of repeating myself from the last time this question came
up, let me selfishly say as a NTP maintainer that I do not look
forward to NTP configure failing with a message indicating GNU make is
required and could not be located.  I have no appreciation for how
much simpler and easier to maintain Automake might become with a shift
from targeting portable make to requiring GNU make.  I've never
maintained Makefile or Makefile.am files in a GNU-make-only project.
I do find it is sometimes easier to track down problems affecting both
GNU make and more traditional implementations using a traditional
make, since GNU make's verbose debug output is so much longer due to
its many implicit rules.

It would be my inclination to stay with older Automake as long as
feasible if newer Automake drops support for traditional make.  Harlan
Stenn, who initially converted the NTP code to use Autoconf and
Automake, likely has a different perspective which might well matter
more than mine.

Cheers,
Dave Hart



sourceware.org Automake mailing list archive is a dead-end

2011-07-27 Thread Dave Hart
The 3rd hit on:

http://www.google.com/search?q=Automake+mailing+list+archive

is RedHat's:

http://sourceware.org/ml/automake/

This is a dead-end, though, as it has nearly no recent messages
archived, and those I could find were all Western Union phishing
attempts.  I see at http://sources.redhat.com/automake/ that only
automake-cvs@ and automake-prs@ mailing lists are now hosted by
RedHat.  It would be spiffy if the dead-end sourceware.org page would
redirect or point to the real archive at
http://lists.gnu.org/archive/html/automake/.  If it has older archives
not present at lists.gnu.org, it would be nice if the responsible
party would shut down the ongoing archiving of the Western Union
stuff, and possibly trim the index to only show such older, legitimate
archives.

Prefixing "GNU " to the query gets more useful results, but hey, my
fingers are lazy.  I welcome correction if my facts are confused.  I'm
certainly unclear on the differences between sources.redhat.com and
sourceware.org.

Cheers,
Dave Hart



Re: Make looks for the wrong file

2011-07-27 Thread Dave Hart
On Tue, Jul 26, 2011 at 13:28 UTC, GAVA Cédric  wrote:
> I moved the DedicatedSoftware.c file into src/state_machine directory, and 
> changed my Makefile.am to the following :
> bridge_SOURCES  = src/state_machine/DedicatedSoftware.c
>        $> make: *** No rule to make target .../src/DedicatedSoftware.c', 
> needed by `bridge-DedicatedSoftware.o'.  Stop.
> What do I miss ?

Your .deps directory has a makefile fragment referring to the original
path to that .c file, which make is including and hence barfing.  I
think re-running automake is all that's needed, which should have
happened automatically.

The ntp.org reference implementation of NTP includes a "deps-ver"
mechanism for this.  When a developer renames or deletes files in a
way that would trigger this sort of .deps-related build break for
those tracking the source and rebuilding in an existing build
directory, the developer bumps the "deps-ver", which triggers removal
of stale .deps/ contents so they will be rebuilt correctly.  The
sentinel file deps-ver at the top of the source tree holds the version
whose contents are changed, or "bumped", to trigger the cleanup.
depsver.mf contains the logic and must be included by each Makefile.am
using Automake's automatic dependency tracking.

Cheers,
Dave Hart



Re: Modify $PATH for all subdirectories

2011-04-07 Thread Dave Hart
On Wed, Apr 6, 2011 at 20:45 UTC, Too, Justin A.  wrote:
> Or is there a better way to accomplish this?

Instead of changing $PATH, you could refer to the executable relative
to $(top_builddir) in your check-local targets:

check-local: check-something
check-something:
	$(top_builddir)/scripts/test/install/bin/something --or --other

That may be more straightforward than portably modifying the
environment for submakes.

Cheers,
Dave Hart



Re: A way to specify a host CC during cross compile?

2011-04-04 Thread Dave Hart
On Mon, Apr 4, 2011 at 22:15 PM, Martin Hicks  wrote:
> Is there a way to specify a different compiler for compile-time helper
> programs that are used during the build of a package?

I think the short answer is no.  I have some source files in a package
which are built by running a C program, which would be a use case for
$HOSTCC-type functionality.  I use a cross-compiling AM_CONDITIONAL to
disable the rules for updating those generated sources, and I
distribute them.  The result is changes to the sources of the
generated sources must be built in a non-cross-compile environment,
then the up-to-date generated sources can be transported (such as via
make dist, or committing to SCM) to the cross-build environment.
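A minimal sketch of that arrangement, with placeholder names
(CROSS_COMPILING, genprog, generated.c) that are not taken from any real
package:

```make
dnl configure.ac fragment: $cross_compiling is set by configure itself.
AM_CONDITIONAL([CROSS_COMPILING], [test "x$cross_compiling" = xyes])

## Makefile.am fragment: distribute generated.c, and regenerate it only
## in native builds where the freshly built genprog can run.
EXTRA_DIST = generated.c
if !CROSS_COMPILING
generated.c: genprog$(EXEEXT)
	./genprog$(EXEEXT) > $@
endif
```

In a cross build the rule is simply omitted, so the distributed copy of
generated.c is compiled as-is.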

Cheers,
Dave Hart



Re: not breaking "make" after m4 macros and source files changed

2011-04-03 Thread Dave Hart
[Resending, copy sent at 09:09 UTC from h...@ntp.org has not arrived
in automake@ archive, though it is in bug-gnulib@ archive.]

On Sun, Apr 3, 2011 at 08:22 UTC, Ralf Wildenhues wrote:
> * Bruno Haible wrote on Sat, Apr 02, 2011 at 07:08:14PM CEST:
>>   - A removed .h file.
>> Before: configure.ac depends on m4/macros.m4 that AC_SUBSTs STDIO_H to 
>> stdio.h.
>> lib/stdio.h is generated through lib/Makefile if STDIO_H is 
>> non-empty.
>> lib/foo.c includes stdio.h and needs to be compiled for 'all'.
>> After:  configure.ac depends on m4/macros.m4 that AC_SUBSTs STDIO_H to 
>> empty.
>> lib/stdio.h is generated through lib/Makefile if STDIO_H is 
>> non-empty.
>> lib/foo.c includes stdio.h and needs to be compiled for 'all'.
>> Here you need to check that after "make", lib/stdio.h is gone.
>
> This is a case that doesn't currently work, as nothing causes
> lib/stdio.h to be removed.  It is not yet clear to me where to
> stick those semantics sanely.

Removing or renaming a header can break the build for people tracking
the source using a version control system who reuse a build directory
from before the header removal/renaming, as I recall.  The problem I
saw was make failing because a .Po dependency tracking include file
was not present.  NTP has a manual mechanism to allow this scenario to
work:  Each Makefile.am includes depsver.mf which drops a version
stamp in each .deps directory if there is not one already, and
compares the version if there is a version stamp already present.
When the problem is recognized or anticipated, the .deps version (in
file deps-ver) is bumped.  When depsver.mf detects a version mismatch,
it forces regeneration of the .deps directory.  The implementation is
limited by being a make (not automake) fragment to the top-level and
one level of subdirs, but that's good enough for us.

It would be slick if Automake provided something similar without the
depth restriction.  Bonus points will be given for not requiring the
developer renaming/removing headers to manually trigger the mechanism.  ;)

Below is our depsver.mf fragment [1], with comments that may well be
more accurate than my recollection above.

Cheers,
Dave Hart

[1] depsver.mf:

$(DEPDIR)/deps-ver: $(top_srcdir)/deps-ver
   @[ -f $@ ] ||   \
   cp $(top_srcdir)/deps-ver $@
   @[ -w $@ ] ||   \
   chmod ug+w $@
   @cmp $(top_srcdir)/deps-ver $@ > /dev/null || ( \
   $(MAKE) clean &&\
   echo -n "Prior $(subdir)/$(DEPDIR) version " && \
   cat $@ &&   \
   rm -rf $(DEPDIR) && \
   mkdir $(DEPDIR) &&  \
   case "$(top_builddir)" in   \
.) \
   ./config.status Makefile depfiles   \
   ;;  \
..)\
   cd .. &&\
   ./config.status $(subdir)/Makefile depfiles &&  \
   cd $(subdir)\
   ;;  \
*) \
   echo 'Fatal: depsver.mf Automake fragment limited'  \
'to immediate subdirectories.' &&  \
   echo "top_builddir: $(top_builddir)" && \
   echo "subdir:   $(subdir)" &&   \
   exit 1  \
   ;;  \
   esac && \
   echo -n "Cleaned $(subdir)/$(DEPDIR) version " &&   \
   cat $(top_srcdir)/deps-ver  \
   )
   cp $(top_srcdir)/deps-ver $@

.deps-ver: $(top_srcdir)/deps-ver
   @[ ! -d $(DEPDIR) ] || $(MAKE) $(DEPDIR)/deps-ver
   @touch $@

BUILT_SOURCES += .deps-ver
CLEANFILES += .deps-ver

#
# depsver.mf included in Makefile.am for directories with .deps
#
# When building in the same directory with sources that change over
# ti

Re: not breaking "make" after m4 macros and source files changed

2011-04-03 Thread Dave Hart
On Sun, Apr 3, 2011 at 08:22 UTC, Ralf Wildenhues wrote:
> * Bruno Haible wrote on Sat, Apr 02, 2011 at 07:08:14PM CEST:
>>   - A removed .h file.
>>     Before: configure.ac depends on m4/macros.m4 that AC_SUBSTs STDIO_H to 
>> stdio.h.
>>             lib/stdio.h is generated through lib/Makefile if STDIO_H is 
>> non-empty.
>>             lib/foo.c includes stdio.h and needs to be compiled for 'all'.
>>     After:  configure.ac depends on m4/macros.m4 that AC_SUBSTs STDIO_H to 
>> empty.
>>             lib/stdio.h is generated through lib/Makefile if STDIO_H is 
>> non-empty.
>>             lib/foo.c includes stdio.h and needs to be compiled for 'all'.
>>     Here you need to check that after "make", lib/stdio.h is gone.
>
> This is a case that doesn't currently work, as nothing causes
> lib/stdio.h to be removed.  It is not yet clear to me where to
> stick those semantics sanely.

Removing or renaming a header can break the build for people tracking
the source using a version control system who reuse a build directory
from before the header removal/renaming, as I recall.  The problem I
saw was make failing because a .Po dependency tracking include file
was not present.  NTP has a manual mechanism to allow this scenario to
work:  Each Makefile.am includes depsver.mf which drops a version
stamp in each .deps directory if there is not one already, and
compares the version if there is a version stamp already present.
When the problem is recognized or anticipated, the .deps version (in
file deps-ver) is bumped.  When depsver.mf detects a version mismatch,
it forces regeneration of the .deps directory.  The implementation is
limited by being a make (not automake) fragment to the top-level and
one level of subdirs, but that's good enough for us.

It would be slick if Automake provided something similar without the
depth restriction.  Bonus points will be given for not requiring the
developer renaming/removing headers to manually trigger the mechanism.  ;)

Below is our depsver.mf fragment [1], with comments that may well be
more accurate than my recollection above.

Cheers,
Dave Hart

[1] depsver.mf:

$(DEPDIR)/deps-ver: $(top_srcdir)/deps-ver
@[ -f $@ ] ||   \
cp $(top_srcdir)/deps-ver $@
@[ -w $@ ] ||   \
chmod ug+w $@
@cmp $(top_srcdir)/deps-ver $@ > /dev/null || ( \
$(MAKE) clean &&\
echo -n "Prior $(subdir)/$(DEPDIR) version " && \
cat $@ &&   \
rm -rf $(DEPDIR) && \
mkdir $(DEPDIR) &&  \
case "$(top_builddir)" in   \
 .) \
./config.status Makefile depfiles   \
;;  \
 ..)\
cd .. &&\
./config.status $(subdir)/Makefile depfiles &&  \
cd $(subdir)\
;;  \
 *) \
echo 'Fatal: depsver.mf Automake fragment limited'  \
 'to immediate subdirectories.' &&  \
echo "top_builddir: $(top_builddir)" && \
echo "subdir:   $(subdir)" &&   \
exit 1  \
;;  \
esac && \
echo -n "Cleaned $(subdir)/$(DEPDIR) version " &&   \
cat $(top_srcdir)/deps-ver  \
)
cp $(top_srcdir)/deps-ver $@

.deps-ver: $(top_srcdir)/deps-ver
@[ ! -d $(DEPDIR) ] || $(MAKE) $(DEPDIR)/deps-ver
@touch $@

BUILT_SOURCES += .deps-ver
CLEANFILES += .deps-ver

#
# depsver.mf included in Makefile.am for directories with .deps
#
# When building in the same directory with sources that change over
# time, such as when tracking using bk, the .deps files can become
# stale with respect to moved, del

Re: Removing Mac OS X resource forks from distribution tarballs

2011-03-30 Thread Dave Hart
Hi Ralf,
On Thu, Mar 31, 2011 at 05:28 UTC, Ralf Wildenhues wrote:
> Hello Dave,
>
> * Dave Hart wrote on Wed, Mar 30, 2011 at 11:06:02PM CEST:
>> Right, one approach would be to run a dist-hook which strips all
>> resource forks from distdir files.
>>
>> An equally effective approach which Automake could potentially handle
>> generally on Darwin would be to instruct tar to ignore resource forks
>> when creating the tar file.
>
> Please show how that would work.

I don't use Macs, but web searching suggests an answer.  Apparently,
there is no command-line switch to disable generation of ._
pseudofiles in tarballs, but there is an undocumented environment
variable that so modifies tar behavior,
COPY_EXTENDED_ATTRIBUTES_DISABLE=true. For reasons unknown outside the
bowels of One Infinite Loop, this was renamed to the
far-less-appropriate COPYFILE_DISABLE as of the Leopard release of Mac
OS X.  [1]
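A sketch of how a maintainer could use this by hand today (the package
name is hypothetical; on Darwin these variables suppress the "._*"
AppleDouble members, while on other systems tar simply ignores them):

```shell
# Hypothetical distdir for illustration:
mkdir -p mypackage-1.0
echo demo > mypackage-1.0/README

# Set both names to cover tar on pre- and post-Leopard Mac OS X:
COPYFILE_DISABLE=true COPY_EXTENDED_ATTRIBUTES_DISABLE=true \
    tar -czf mypackage-1.0.tar.gz mypackage-1.0

# List the archive; no "._" entries should appear:
tar -tzf mypackage-1.0.tar.gz
```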

When the build triplet matches *-apple-darwin*, Automake could include
in the distdir target code to set both of the undocumented environment
variables to true.  That presumes no Automake clients find the
extended attributes and resource forks in tarballs valuable.

Cheers,
Dave Hart

[1] http://norman.walsh.name/2008/02/22/tar



Re: Removing Mac OS X resource forks from distribution tarballs

2011-03-30 Thread Dave Hart
On Wed, Mar 30, 2011 at 20:55 UTC, Adam Mercer  wrote:
> I thought of something like that but the problem is that these files
> only show up on file systems that don't support OS X resource forks,
> such as ext3, and as I build the tarballs on OS X they won't show up
> as individual. I've been looking for some code to identify which files
> have resource forks but the examples don't seem to work... I'll ask on
> a more specific OSX development list.

Right, one approach would be to run a dist-hook which strips all
resource forks from distdir files.

An equally effective approach which Automake could potentially handle
generally on Darwin would be to instruct tar to ignore resource forks
when creating the tar file.  Or, if Darwin's tar can't do that but can
extract resource forks as if your filesystem didn't support them,
Automake could round-trip to remove them on Darwin (tar, extract with
resource forks named ._filename, find | xargs to rm the ._* files, tar
again).

Cheers,
Dave Hart



Re: PKG_CHECK_MODULES on system without pkg-config installed?

2011-03-10 Thread Dave Hart
On Thu, Mar 10, 2011 at 1:53 PM, Jef Driesen  wrote:
> You don't have to convince me of the advantages of using pkg-config.
> I want to use pkg-config for exactly the same reasons as you explain.
> But when I tried to build my project on a system without pkg-config
> installed it failed. Proceeding without the feature (that would be enabled
> through the pkg-config checks) would be perfectly acceptable, except
> that autoconf fails on the PKG_* macros and aborts. So I can't build
> anything at all and that's the main problem.

I actually didn't use the PKG_ macros at all so I didn't run into this
problem -- configure.ac searches for pkg-config using AC_PATH_TOOL and
invokes it.

However, I like the idea of using PKG_* m4 macros instead, and I'm
disappointed to hear of your snag.  Do not despair, you can use the
macros on systems that have them (and not break on systems without) by
using m4_ifdef to conditionalize the use.  For example, this is how
ntp-dev enables Automake's silent rules generation without requiring
automake 1.11, which was the first release to provide AM_SILENT_RULES:

m4_ifdef(
[AM_SILENT_RULES],
[AM_SILENT_RULES([yes])]
)
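The same m4_ifdef guard can wrap the pkg-config macros themselves.  A
sketch, where FOO and foo are placeholder names: if pkg.m4 was not
available when aclocal ran, configure simply proceeds without the
optional dependency.

```m4
dnl Degrade gracefully if pkg.m4 was not found by aclocal.
m4_ifdef([PKG_CHECK_MODULES],
    [PKG_CHECK_MODULES([FOO], [foo >= 1.0],
        [have_foo=yes], [have_foo=no])],
    [have_foo=no])
AM_CONDITIONAL([HAVE_FOO], [test "x$have_foo" = xyes])
```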

Cheers,
Dave Hart



Re: PKG_CHECK_MODULES on system without pkg-config installed?

2011-03-10 Thread Dave Hart
On Thu, Mar 10, 2011 at 10:02 UTC, Jef Driesen  wrote:
> I'm aware of the advantage of using pkg-config. I even supply the *.pc
> files for my own projects. But the point is that if I want to support systems
> that don't have pkg-config installed (like Mac OS X in my case), I have to
> provide a fallback with manual detection anyway.

But you are not everyone.  In some cases, the use of the library is
optional, or there is a bundled copy of the library used when an
installed one isn't found via pkg-config.  It's reasonable to provide
no fallback probing for installed headers and libs -- you just
proceed without the feature or substitute a bundled copy.

> So why not skip pkg-config entirely?

Personally, I want to never write any more manual detection code when
pkg-config will do the job.  I'm also a big fan of the way
PKG_CONFIG_PATH lets me customize my choices as a user of a system
which is shared with and administered by others.  The staff likes a
6-year-old version of openssl with backported patches as of three years
ago, which triggers compiler warnings that modern openssl headers
don't?  I love that I can install a newer openssl in my homedir and
arrange my PKG_CONFIG_PATH to find my openssl before the decrepit one,
for packages that respect any openssl.pc found via pkg-config.  I
could get by without pkg-config, but it would mean specifying extra
configure options repeatedly to point to my local openssl.  pkg-config
lets me enshrine that preference once and get on with more important
things than remember which combination of overrides I need at
configure time.

Cheers,
Dave Hart



Re: Conditional target

2011-01-26 Thread Dave Hart
On Thu, Jan 27, 2011 at 03:40 UTC, Sergio Belkin  wrote:
> I have the target test, being:
>
> tests = $(EXTRA_PROGRAMS)
>
> I've added:
>
> if forcestatic
> tests: LIBS += $(STATIC_RESOLV)
> end if

How about:

mytestprog_LDADD =
if forcestatic
mytestprog_LDADD += $(STATIC_RESOLV)
endif
^ one word

If you have more than the hypothetical mytestprog in EXTRA_PROGRAMS
which require the additional library, use simply LDADD instead of
mytestprog_LDADD.

Cheers,
Dave Hart



Re: bug reports, and lack of feedback (was: make -j1 fails)

2011-01-18 Thread Dave Hart
On Tue, Jan 18, 2011 at 19:30 UTC, Ralf Wildenhues wrote:
> * Dave Hart wrote on Tue, Jan 18, 2011 at 09:49:02AM CET:
>> While you're waiting for that,
>> perhaps you could pursue the problem I
>> did take the time to provide a reduced test case for in November:
>>
>> http://lists.gnu.org/archive/html/automake/2010-11/msg00135.html
>
>> Note that this issue is no longer a problem for NTP -- autogen's
>> libopts now provides LIBOPTS_CHECK_NOBUILD, which sidesteps the need
>> to conditionalize AC_CONFIG_FILES([libopts/Makefile]), and works
>> correctly on Automake 1.10, which doesn't support AM_COND_IF
>> conditionalization of AC_CONFIG_FILES.
>
> Good to know.
>
>> I am annoyed no one has taken the time to follow up after I took the
>> time to produce a reduced test case illustrating the automake
>> misbehavior, and each time I see a request for a reduced repro, I
>> wonder what I might have done wrong in anticipating the request and
>> providing the reduced test case in the initial report.
>
> I looked at it for maybe half an hour back then, and didn't see an easy
> way to fix it.  Sorry.  I should maybe have followed up to let you know.
> You didn't do anything wrong, otherwise I would eventually have asked.
> But anyway we should've thanked you for the report, so please allow me
> to thank you now for the nice and well-written bug report!

Thank you for the update.  Knowing that you were able to understand my
less-than-succinct report, and to recognize the problem, satisfies
most of my concerns.

> Generally, there are more bug reports than there are people looking at
> them, analyzing and fixing them.  As is the case in so many free
> software projects.  If you are dissatisfied with that, and you have
> resources, you are very welcome to help out.

I understand.

> Other than that, I guess I
> should encourage using our new-ish debbugs bug tracker (just write to
> bug-automake to open a new PR) to be a little more sure issues don't get
> lost.
>
> I typically try to make sure rather quickly that a report is complete,
> so that when someone eventually gets to it, they have a chance to do
> something productive with it even if the original reporter has gone off
> to some other pasture in the meantime.
>
> Since you now have a workaround for your bug, I hope you understand that
> the priority of it is rather low.  Sorry again, but that's how bug
> economics work, necessarily.

I do understand the priority is low for practical reasons.  From an
engineering standpoint, I remain unsatisfied that Automake claims to
allow conditionalizing AC_CONFIG_FILES in AM_COND_IF but flubs this
instance.  I will open a PR; thanks for pointing out what should have
been obvious to me as I knew of the debbugs tracker for automake.

Thanks for your time,
Dave Hart



Re: make -j1 fails

2011-01-18 Thread Dave Hart
On Fri, Jan 14, 2011 at 6:27 PM, Ralf Wildenhues  wrote:
> * Pippijn van Steenhoven wrote on Fri, Jan 14, 2011 at 09:38:36AM CET:
>> On Thu, Jan 13, 2011 at 07:22:20PM +0100, Ralf Wildenhues wrote:
>> > If the failure persists, please post short configure.ac and
>> > Makefile.am which expose the problem for you. You can start with
>> > what I show below, and adjust that if it doesn't expose it.
>>
>> The code didn't trigger the bug and I couldn't easily reproduce it by
>> modifying it. It involves considerable effort to modify my existing
>> project and upload it to the FreeBSD build machine and I don't have time
>> to do that, now.
>
> Understood.  This sounds like a FreeBSD make bug, but I'm not sure.
> Can you make your project available for us to try and reproduce the bug
> (I have access to a couple of FreeBSD systems)?  If not, then I'm afraid
> I'll not be able to pursue this further before seeing a reduced version.

While you're waiting for that, perhaps you could pursue the problem I
did take the time to provide a reduced test case for in November:

http://lists.gnu.org/archive/html/automake/2010-11/msg00135.html

Note that this issue is no longer a problem for NTP -- autogen's
libopts now provides LIBOPTS_CHECK_NOBUILD, which sidesteps the need
to conditionalize AC_CONFIG_FILES([libopts/Makefile]), and works
correctly on Automake 1.10, which doesn't support AM_COND_IF
conditionalization of AC_CONFIG_FILES.

I am annoyed no one has taken the time to follow up after I took the
time to produce a reduced test case illustrating the automake
misbehavior, and each time I see a request for a reduced repro, I
wonder what I might have done wrong in anticipating the request and
providing the reduced test case in the initial report.

Cheers,
Dave Hart



Re: [CRAZY PROPOSAL] Automake should support only GNU make

2011-01-16 Thread Dave Hart
From my perspective as a maintainer of the reference implementation of
NTP, I value make portability and would be disappointed to see
Automake become GNU-make-specific.  We strive to make our tarballs
easy to build and install on a wide variety of new and old systems.
As an example, until a little over a year ago when NTP 4.2.6 was
released, our stable release supported K&R C compilers.  Requiring GNU
make would reduce the portability of NTP, unless we were to bundle GNU
make source and build it when needed, which is likely not an option
for us as our project has a BSD-derived license.

Cheers,
Dave Hart



Re: AM_COND_IF for earlier Automake

2010-12-19 Thread Dave Hart
On Sun, Dec 19, 2010 at 14:13 UTC, Ralf Wildenhues
 wrote:
> * Dave Hart wrote on Sun, Dec 19, 2010 at 02:47:58PM CET:
>> On Sun, Dec 19, 2010 at 10:48 UTC, Ralf Wildenhues wrote:
>> > * Dave Hart wrote on Sat, Dec 18, 2010 at 07:57:13PM CET:
>> >> m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF],
>> >> [m4_ifndef([$1_TRUE],
>> >>          [m4_fatal([$0: no such condition "$1"])])dnl
>> >> if test -z "$$1_TRUE"; then :
>> >>   m4_n([$2])[]dnl
>> >> m4_ifval([$3],
>> >> [else
>> >>   $3
>> >> ])dnl
>> >> fi[]dnl
>> >> ])])
>> >
>> > Looks ok to me.  If you experience problems later, please report back.
>>
>> When tested as above, my AM_COND_IF replacement was being expanded even
>> with Automake 1.11, leading me to change the m4_fatal message to make it
>> clear it was coming from a AM_COND_IF imposter.  Substituting
>> m4_define for AC_DEFUN cured the problem.  Is it inappropriate to try
>> to conditionalize AC_DEFUN under m4_ifndef?
>
> Can you show the error you got, and maybe also a small example how you
> got it?  I'm not sure I fully understand otherwise.
>
> Normally, AC_DEFUN under m4_ifndef should work ok.  Hmm, you might want
> to move the AC_DEFUN to a new line, as aclocal essentially greps for it.

The error I got was AM_COND_IF: no such condition "..." which was
coming from the replacement AM_COND_IF despite using Automake 1.11.  I
can no longer reproduce that failure, so it was probably a result of
my own misunderstanding.  Now it's working for me with Automake 1.10
and 1.11 with AC_DEFUN on its own line, or combined with the prior.

> With m4_define, you need to ensure yourself that the .m4 file you put
> this in is included in aclocal.m4 (or configure.ac).

Good to know, thanks.  I removed the "no such condition" check because
$1_TRUE is not m4_define()d and I didn't want to get into redefining
AM_CONDITIONAL simply to enable this check in the AM_COND_IF backport.
Here's what I've hopefully settled on:


dnl AC_CONFIG_FILES conditionalization requires using AM_COND_IF, however
dnl AM_COND_IF is new to Automake 1.11.  To use it on new Automake without
dnl requiring same, a fallback implementation for older Automake is provided.
dnl Note that disabling of AC_CONFIG_FILES requires Automake 1.11; this code
dnl is correct only in terms of the generated m4sh script.
m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF], [
if test -z "$$1_TRUE"; then :
  m4_n([$2])[]dnl
m4_ifval([$3],
[else
  $3
])dnl
fi[]dnl
])])

Thanks for all the assistance,
Dave Hart



Re: AM_COND_IF for earlier Automake

2010-12-19 Thread Dave Hart
On Sun, Dec 19, 2010 at 10:48 UTC, Ralf Wildenhues
 wrote:
> Hi Dave,
> * Dave Hart wrote on Sat, Dec 18, 2010 at 07:57:13PM CET:
>> m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF],
>> [m4_ifndef([$1_TRUE],
>>          [m4_fatal([$0: no such condition "$1"])])dnl
>> if test -z "$$1_TRUE"; then :
>>   m4_n([$2])[]dnl
>> m4_ifval([$3],
>> [else
>>   $3
>> ])dnl
>> fi[]dnl
>> ])])
>
> Looks ok to me.  If you experience problems later, please report back.

When tested as above, my AM_COND_IF replacement was being expanded even
with Automake 1.11, leading me to change the m4_fatal message to make it
clear it was coming from a AM_COND_IF imposter.  Substituting
m4_define for AC_DEFUN cured the problem.  Is it inappropriate to try
to conditionalize AC_DEFUN under m4_ifndef?

This seems to be doing the right thing on Automake 1.11.  Not yet
tested with older Automake.

dnl AC_CONFIG_FILES conditionalization requires using AM_COND_IF, however
dnl AM_COND_IF is new to Automake 1.11.  To use it on new Automake without
dnl requiring same, a fallback implementation for older Automake is provided.
dnl Note that disabling of AC_CONFIG_FILES requires Automake 1.11; this code
dnl is correct only in terms of the generated m4sh script.
m4_ifndef([AM_COND_IF], [m4_define([AM_COND_IF],
[m4_ifndef([$1_TRUE],
   [m4_fatal([$0 backport: no such condition "$1"])])dnl
if test -z "$$1_TRUE"; then :
  m4_n([$2])[]dnl
m4_ifval([$3],
[else
  $3
])dnl
fi[]dnl
])])

Thanks again,
Dave Hart



Re: AM_COND_IF for earlier Automake

2010-12-18 Thread Dave Hart
On Sat, Dec 18, 2010 at 18:28 UTC, Dave Hart  wrote:
> How is this for a AM_COND_IF that works at the m4sh level on older Automake:

I did not properly integrate Ralf's latest AM_COND_IF changes
considering Stefano's feedback about _AM_COND_VALUE_foo on older
Automake.  3rd time's charmed?

m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF],
[m4_ifndef([$1_TRUE],
 [m4_fatal([$0: no such condition "$1"])])dnl
if test -z "$$1_TRUE"; then :
  m4_n([$2])[]dnl
m4_ifval([$3],
[else
  $3
])dnl
fi[]dnl
])])

Thanks for your time,
Dave Hart



Re: AM_COND_IF for earlier Automake

2010-12-18 Thread Dave Hart
How is this for an AM_COND_IF that works at the m4sh level on older Automake:

m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF],
[m4_ifndef([_AM_COND_VALUE_$1],
  [m4_fatal([$0: no such condition "$1"])])dnl
if test -z "$$1_TRUE"; then :
   m4_n([$2])[]dnl
m4_ifval([$3],
[else
   $3
])dnl
fi[]dnl
])])

I do not know the difference between m4_default and m4_n usage here;
I'm just applying the latest AM_COND_IF changes.  If this code is not
appropriate for Automake prior to 1.11, please let me know.

Cheers,
Dave Hart



Re: AM_COND_IF for earlier Automake

2010-12-18 Thread Dave Hart
On Sat, Dec 18, 2010 at 11:56 AM, Ralf Wildenhues
 wrote:
> Hello Dave,
>
> * Stefano Lattarini wrote on Sat, Dec 18, 2010 at 11:18:04AM CET:
>> On Saturday 18 December 2010, Dave Hart wrote:
>> > I'd like a package I depend on to use AM_COND_IF, but it does not want
>> > to demand Automake 1.11 at this point.  Does this seem like a
>> > reasonable solution?
>
>> BTW, I see that automake 1.11 also traces the macros _AM_COND_IF,
>> _AM_COND_ELSE and _AM_COND_ENDIF, which are not present nor traced in
>> 1.10, so my suggestion above might still be insufficient in some
>> situations (if not all).
>
> Exactly.  AC_CONFIG_* instances within AM_COND_IF arguments will be
> mistreated.

I do not expect AM_COND_IF magic regarding AC_CONFIG_FILES to work
correctly when used with pre-1.11 Automake.  Rather, I'm simply
looking for the m4sh if/else to work.

My package _does_ simply require Automake 1.11, as it is needed to get
correct results with our nested subpackages.  See this unrequited
message:

http://lists.gnu.org/archive/html/automake/2010-11/msg00135.html

However, the libopts author does not seem as motivated as me to force
other libopts clients to upgrade to Automake 1.11 at this point.  So
my interest is getting a AM_COND_IF-alike that will do the right thing
m4sh-wise on newer and older Automake.  I only care about the
AC_CONFIG_FILES magic working correctly under Automake 1.11 and later.

>> My advice is: just require Automake 1.11.  That's not unreasonable,
>> since that's anyway a requirement for developers, not for users.
>
> Agreed.
>
> Also, your implementation shares the bug fixed in v1.11-152-g6f6e328:
>
>  - The AM_COND_IF macro also works if the shell expression for the conditional
>    is no longer valid for the condition.

I'll attempt to integrate that patch in my ripoff, thanks.

Dave Hart



AM_COND_IF for earlier Automake

2010-12-18 Thread Dave Hart
I'd like a package I depend on to use AM_COND_IF, but it does not want
to demand Automake 1.11 at this point.  Does this seem like a
reasonable solution?

m4_ifndef([AM_COND_IF], [AC_DEFUN([AM_COND_IF],
[m4_ifndef([_AM_COND_VALUE_$1],
   [m4_fatal([$0: no such condition "$1"])])dnl
if _AM_COND_VALUE_$1; then
  m4_default([$2], [:])
m4_ifval([$3],
[else
  $3
])dnl
fi[]dnl
])])

Thanks,
Dave Hart



Re: Is that a way to modify tar.m4?

2010-11-25 Thread Dave Hart
On Fri, Nov 26, 2010 at 2:23 AM, Dave Hart  wrote:
> 1.  Unpack tar
> 2.  configure
> 3.  make
>
> Assuming success so far, then proceed to:
>
> 4.  patch

Sorry, since you are modifying a .m4:

5.  autoreconf
6.  configure
7.  make

You may be able to skip autoreconf, I'm not sure.

Cheers,
Dave Hart



Re: Is that a way to modify tar.m4?

2010-11-25 Thread Dave Hart
On Fri, Nov 26, 2010 at 2:14 AM, xufeng zhang
 wrote:
> On 11/26/2010 06:39 AM, Stefano Lattarini wrote:
>> Just a hunch: have you built the automake and aclocal scripts *before*
>> modifying tar.m4?  If you haven't, well, you should build them first,
>> because the Automake build system uses the very automake and aclocal
>> scripts it ships, so you must be sure to build them before hacking any
>> other part of automake.
>
> You can see I do nothing before modify tar.m4, I don't think it's necessary
> for me to do that.

You should be able to see your steps lead to trouble, and be more
willing to listen when you are told exactly what your problem is and
how to get around it.

1.  Unpack tar
2.  configure
3.  make

Assuming success so far, then proceed to:

4.  patch
5.  make

Enjoy the journey,
Dave Hart



Re: on Windows, BUILT_SOURCES does not append .exe

2010-11-21 Thread Dave Hart
On Sun, Nov 21, 2010 at 22:44 UTC, Vincent Torri  wrote:
> On Sun, 21 Nov 2010, Ralf Wildenhues wrote:
>> * Vincent Torri wrote on Sun, Nov 21, 2010 at 11:14:23PM CET:
>>> If I don't use BUILT_SOURCES, cmapdump binary is not built before
>>> libcmaps, hence cmap_tounicode.c is not created, and compilation of
>>> libcmaps fails.
>>>
>>> Is there another solution ?
>>
>> Yes: just specify cmapdump$(EXEEXT) as prerequisite to cmap_tounicode.c.
>
> isn't what the line:
>
> cmap_tounicode.c: cmapdump $(cmap_tounicode_files)
>
> does?  Note that having that rule is not sufficient on linux (that is, even
> if $(EXEEXT) 'should' (but not 'must', as .exe suffix is not necessary) be
> added on windows, it does not work on linux)

In that case you may need to add cmap_tounicode.c to BUILT_SOURCES,
leaving cmapdump out of same.
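A hypothetical Makefile.am fragment sketching that suggestion (the rule
body and the exact file lists are assumptions based on the thread, not
taken from the real project):

```make
# The generated source goes in BUILT_SOURCES; the generator tool is a
# plain prerequisite (with $(EXEEXT) so it also works on Windows) and
# is deliberately NOT listed in BUILT_SOURCES itself.
BUILT_SOURCES = cmap_tounicode.c

cmap_tounicode.c: cmapdump$(EXEEXT) $(cmap_tounicode_files)
	./cmapdump$(EXEEXT) $(cmap_tounicode_files) > $@
```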

Cheers,
Dave Hart



Automake AM_COND_IF problem: libopts/Makefile.in split personality

2010-11-21 Thread Dave Hart
I've put together a stripped-down example of a problem currently
beguiling Harlan Stenn and myself with Automake used in the NTP
reference implementation:

http://support.ntp.org/people/hart/am-libopts-subpkg.tar.gz

The layout is:

am-libopts-subpkg/
    configure.ac
    Makefile.am
    top.c
    subproj/
        configure.ac
        Makefile.am
        sub.c
        libopts/
            m4/
            ...

Both top.c and sub.c are libopts clients, sharing the single
subproj/libopts bundled copy.  The problem is that sometimes
subproj/libopts/Makefile.in is generated correctly by the subproj
machinery, and other times it is generated incorrectly as if it were
produced by the top-level configure.  The incorrect resulting
subproj/libopts/Makefile breaks "make distcheck" using GNU Make,
though curiously FreeBSD's make is able to complete distcheck
successfully.  The incorrect subproj/libopts/Makefile contains "subdir
= subproj/libopts" while the working one has "subdir = libopts".  The
distcheck break happens when subproj/libopts/Makefile attempts to
regenerate itself using a missing automake invocation that contains an
extra "subproj/" that makes the filename non-existent.

In trying to straighten this out, I modified our copy of
libopts/m4/libopts.m4, originally from Autogen 5.11.2, to add an
optional second argument to LIBOPTS_CHECK which can be used to
conditionalize the AC_CONFIG_FILES(libopts-dir/Makefile) under
AM_COND_IF.  The top-level configure.ac uses this facility, so that
the top-level config.status does not generate subproj/libopts/Makefile
as it would otherwise.  Instead, subproj/config.status generates (from
its perspective) libopts/Makefile.

The current snag I'm having is with autoreconf -i from the top level.
After autoreconf completes, subproj/libopts/Makefile.in is
consistently incorrect.  This can be worked around with, for example:

autoreconf -i -v --no-recursive ; cd subproj ; autoreconf -i -v ; cd ..   # in the srcdir

After which subproj/libopts/Makefile.in is consistently correct.

The workaround is only temporary, as invariably something triggers
automake self-rebuilding magic and the miscreant libopts/Makefile.in
stops progress again.  It appears autoreconf (or automake invoked from
autoreconf) is not respecting the conditionalization of
AC_CONFIG_FILES(libopts-dir/Makefile) using AM_COND_IF in
LIBOPTS_CHECK, so that a recursive autoreconf gives the wrong
top_builddir and subdir in subproj/libopts/Makefile.in, while
autoreconf from the subproj dir does the right thing.

Your thoughts and wisdom are solicited.  Can we avoid going back to a
separate top-level copy of libopts required by the subpackage?  How
might I convince in-the-field Autoconf and Automake to stop fighting
amongst the packages over which owns subproj/libopts/Makefile?  Can
you help me in my quest to stop grepping Makefiles before kicking off
a "make distcheck"?  :)

Thanks for your time,
Dave Hart



Re: Question about automake fails to pass the correct parameters to pax when using large UID/GID

2010-11-07 Thread Dave Hart
On Mon, Nov 8, 2010 at 6:56 AM, xufeng zhang  wrote:
> On 10/30/2010 03:37 PM, Ralf Wildenhues wrote:
>> I don't think there is much that can be done about this in Automake, as
[...]
>> If you still think that Automake is in the position to do something
>> about this, then please provide more details (as I'm not exactly an
>> expert in archive formats portability).
>
> It seems pax has a potential bug which leads to the whole build hang.
> When use a large GID(>2097151), pax failed to generate file, then if
> use the generated file as pax input, pax cannot recognize the format and
> wait input from the terminal.
> Try this:
> 1. mkdir conftest.dir
> 2. sudo chgrp -R 12345678 conftest.dir
> 3. sudo pax -L -x ustar -w conftest.dir > conftest.tar
> 4. pax -r < conftest.tar
> Then pax will definitely hang.

Why are you reporting this to the Automake mailing list?  Ralf kindly
pointed you in the right direction, and asked you to provide details
_if_ you still think Automake is in a position to help.  I don't see
you suggesting what Automake should be doing differently, instead, I
see a pax bug report sent to the Automake list.

Grumpily,
Dave Hart



Re: Force a file to be compiled always

2010-11-04 Thread Dave Hart
On Thu, Nov 4, 2010 at 13:46 UTC, Valentin David
 wrote:
> You probably want to have one object that has symbols for the date and
> the time, and this object to be depending on other objects.

The NTP reference implementation does something along these lines.
Every time any constituent libraries or objects change for a binary, an
updated version.c is generated containing the build timestamp as well
as a generation counter for repeated builds in the same directory.

version.c contains a line like:
char * Version = "ntpd 4.2.7...@1.2247-o Nov 03 19:17:08.92 (UTC-00:00) 2010  (1)" ;

(1) is the generation counter.

This is wired into the Makefile.am by omitting version.c from
program_SOURCES, adding version.o to program_LDADD, and adding a rule
to build it like:

version.o: $(ntpq_OBJECTS) ../libntp/libntp.a Makefile $(top_srcdir)/version
	env CSET=`cat $(top_srcdir)/version` $(top_builddir)/scripts/mkver ntpq
	$(COMPILE) -c version.c

mkver is a shell script which uses the version number from
packageinfo.sh, the repository revision from $CSET, and the current
date/time to produce version.c.
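For illustration, a minimal, hypothetical sketch of what such a
mkver-style script could look like (this is not NTP's actual
scripts/mkver; the default program name, version string, and counter
file name are all assumptions):

```shell
#!/bin/sh
# Hypothetical mkver-style sketch: emit version.c with a build
# timestamp and a per-directory generation counter.
prog=${1:-ntpq}          # program name (assumed default)
ver=4.2.7                # stand-in for the version from packageinfo.sh

# Bump the per-directory generation counter.
count=0
if [ -f .version.count ]; then
  count=$(cat .version.count)
fi
count=$((count + 1))
printf '%s\n' "$count" > .version.count

# Current UTC timestamp, roughly in the format shown above.
stamp=$(date -u '+%b %d %H:%M:%S %Y')

cat > version.c <<EOF
char *Version = "$prog $ver $stamp (UTC-00:00) ($count)";
EOF
```

Running it twice in the same directory would bump the trailing counter
from (1) to (2) without any other input changing.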

The dependency of version.o on Makefile, combined with the practice of
the NTP build shell script of invoking config.status (which
unconditionally regenerates Makefile) before make, means that a run of
the "build" script always generates updated version.c files with the
current timestamp, even if nothing else has changed.

I am not responsible for inventing this scheme, and I may have missed
an important part.

Cheers,
Dave Hart



Re: Automake and Texinfo: clean the info or pdf file

2010-08-31 Thread Dave Hart
On Tue, Aug 31, 2010 at 09:57 UTC, someone
 wrote:
> But my Makefile is also removed :-(
> So, I need to call the configure script to build it again.

./config.status will regenerate the Makefile without a full configure run.
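For example, from the top of an already-configured build tree (this is
standard Autoconf behavior; the file name is the usual default):

```shell
# Regenerate only the named output file from the cached configure results:
./config.status Makefile

# Or re-create all generated files without re-running the feature tests:
./config.status
```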

Cheers,
Dave Hart



Re: build .o files to specific directory using automake

2010-03-18 Thread Dave Hart
On Fri, Mar 19, 2010 at 01:31 UTC, S.C. Leung  wrote:
> Here is the structure of the project:
> project root
>  |
>  -- src/
>  |     |
>  |     -- Many subdirectories/ ...
>  -- build/
>  |
>  -- scripts/
>        |
>        -- configure script
>        |
>        -- other tools
>
> when user run ./configure all object files should be generated under build/
> directory with the same directory structure of src/ directory.

You don't seem to have read any of the replies.  The build directory
is not an attribute of the project layout, it is the user's choice.

> Here's two problems:
> 1. cause usually configure is put under project root, configure even
> couldn't generate Makefile properly.

And you're shooting yourself in the foot putting the configure script
anywhere but the project root.  That is where people are going to look
for it, and while I don't know that you can't make it work, I am
wondering why you feel it is important to be different from every
other Automake and Autoconf client in this regard.

> 2. I think SUB/DIR/ you mentioned above can only be set by VPATH build. That
> means I need to write another script to change current directory and run
> configure there, right?

Only if you insist on swimming against the current.

> I know the structure is weird. I just wonder if automake and autoconf can
> reach there by simply setting some variables.

But you have not explained to the list why you feel the weird
structure is warranted.  Why don't you get things going the way that's
recommended first, get some experience, and then start making dictates
that break POLA after you're less likely to be shooting yourself in
the foot?

Cheers,
Dave Hart




Re: Sun compiler and /usr/local/include

2010-03-06 Thread Dave Hart
On Sat, Mar 6, 2010 at 01:14 UTC, Harlan Stenn wrote:
> You could look at the configure.ac in ntp-dev (or any recent
> ntp-stable), say:
>
>  http://ntp.bkbits.net:8080/ntp-dev/configure.ac?REV=4b6a0c4clgted0re5ogPZQx0QgLLPw
>
> and find the hunk of code starting with:
>
>  AC_MSG_CHECKING([for openssl include directory])
>
> and decide for yourself if there is a better way to go...

I wouldn't recommend using that snippet as a model for two reasons:

1)  It tests for openssl/opensslconf.h but no NTP code includes that file.
2)  it tests for the existence of the header rather than its usability.

If I were to improve that snippet, I would use AC_COMPILE_IFELSE in
the loop to attempt to compile a trivial program that calls an openssl
function #including the same header file(s) as the real openssl client
code in NTP.  Looking at libntp/ssl_init.c, that would be something
like:

CPPFLAGS="$SAVED_CPPFLAGS -I$loop_dir"
AC_COMPILE_IFELSE(
  [AC_LANG_PROGRAM(
    [[
    #include "openssl/err.h"
    #include "openssl/rand.h"
    ]],
    [[
    ERR_load_crypto_strings();
    ]]
  )],
  [break],
  [])

Cheers,
Dave Hart




Re: Sun compiler and /usr/local/include

2010-03-05 Thread Dave Hart
On Fri, Mar 5, 2010 at 21:13 UTC, Charles Brown wrote:
> If the file is; /usr/local/include/package/header.h
> and source is;  #include 
> what goes in configure.ac?
>
> AC_CHECK_HEADER([package/header.h]) just says 'no'.

AC_CHECK_HEADER and AC_CHECK_HEADERS aren't designed for this
situation -- they are used when you simply need to know if it's
available so you can #ifdef HAVE_PACKAGE_HEADER_H.  You want to ensure
they're available by modifying some *CPPFLAGS variable if
package/header.h is not found with the default includes.

You will probably find bliss down the road of AC_COMPILE_IFELSE in a
loop that tries first with no additional -I, then with each of a list
of possible include directories, and adds the resulting -I directive
to a *CPPFLAGS variable while still respecting any user CPPFLAGS.
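A rough sketch of such a loop, for illustration only (the header name
and directory list are placeholders, and this fragment is untested):

```m4
dnl Try the default includes first (empty dir), then each candidate -I.
SAVED_CPPFLAGS=$CPPFLAGS
for loop_dir in '' /usr/local/include /opt/pkg/include; do
  AS_IF([test -n "$loop_dir"],
        [CPPFLAGS="$SAVED_CPPFLAGS -I$loop_dir"])
  AC_COMPILE_IFELSE(
    [AC_LANG_PROGRAM([[#include <package/header.h>]],
                     [[]])],
    [break])
  CPPFLAGS=$SAVED_CPPFLAGS
done
```

On break, CPPFLAGS is left holding the -I directive that worked, while
any user-supplied CPPFLAGS remains in front of it.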

Cheers,
Dave Hart




Re: Sun compiler and /usr/local/include

2010-03-05 Thread Dave Hart
On Fri, Mar 5, 2010 at 19:25 UTC, Charles Brown wrote:
>
> Very new to automake, and can't find an answer to this; What would be put in
> configure.ac to determine whether the detected preprocessor/compiler
> automatically supplies -I/usr/local/include (for example, g++ does, but sun
> CC does not), and if not, how to add it to some CFLAGS variable?

To expand on a point Ralf touched on, the typical Autoconf approach is
to predict as little as possible, and test for exactly what you need.
So I would ask you, when your package is built with Sun cc and fails
for lack of -I/usr/local/include, which header file in
/usr/local/include causes the break?  It should be in the compiler's
error message.  And I would suggest you then consider using a
configure.ac test for that specific header file, assuming your code
#includes it, or for the header file your project #includes which
triggers the inclusion of the wrong/missing header.  That test would
then result in the addition of -I/usr/local/include on your Sun cc
configuration.  At the same time, you could easily expand it later if
another system has that header file in a different directory, by
searching a list of directories for the needed .h, rather than simply
/usr/local/include.

Instead of "my project needs -I/foo/include on system bar", I suggest
"my project needs to find the correct foo.h on all systems".

Cheers,
Dave Hart




Re: cannot create directory `.am2128': Permission denied

2010-02-20 Thread Dave Hart
On Sat, Feb 20, 2010 at 8:52 UTC, Ralf Wildenhues wrote:
> Isn't this still a user error though?  The problem with your info files
> depending on compiled programs is that you then cannot easily distribute
> them?  Put another way, if something in the source tree depends on
> something in the build tree, then that is a packaging bug, because it
> prevents building off a readonly medium, and causes things to be
> regenerated that shouldn't need to.

NTP uses Autogen's autoopts and derives .texi files from the option
definitions file (.def) and from the compiled binary (to get the usage
text) and ran into some issues with the odd dependency.  NTP does not
currently generate .info files so we haven't run into the bug with the
read-only source directory and Automake's .texi.info rule.

NTP 4.2.6 and later[1] solve the dependency problem by avoiding it:
there is no dependency of the .texi on the compiled binary listed,
rather, we rely on the fact that the real source of the usage text
embedded in the compiled binary is the options definitions file, such
as ntpd-opts.def, and any files it includes.  The .texi product is
listed in our .am files under both EXTRA_DIST and noinst_DATA, which
seems to have the desired effect of deferring .texi generation until
after the compiled binary is ready.
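A hypothetical Makefile.am fragment for that arrangement (the file
names follow the ntpd example from above; the rule body is an
assumption, since the real rule drives autogen with NTP's own
template and PATH setup):

```make
# Distribute the generated .texi and defer its (re)generation until
# after the compiled binary exists, via EXTRA_DIST + noinst_DATA.
EXTRA_DIST  = ntpd-opts.texi
noinst_DATA = ntpd-opts.texi

# Regenerate only when the options definitions change; the build dir
# is prepended to PATH so aginfo.tpl can find the compiled binary.
ntpd-opts.texi: ntpd-opts.def
	PATH="$(top_builddir)/ntpd:$$PATH" autogen -T aginfo.tpl ntpd-opts.def
```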

If you build from a read-only source directory, no problem, because
your .def file will be older than your .texi, arranged by the
bootstrap script run by someone building from a SCM checkout, or by
the tarball builder.  No attempt to make the .texi will occur.

You might also notice autogen is invoked in NTP builds with the build
dir prepended to $PATH.  This is needed by aginfo.tpl, the template
that is used to generate the .texi/.menu output, because it invokes
the compiled binary from $PATH if it is not in the current directory,
and we invoke it in the source directory (so that we can distribute
the .texi and most users and even developers then don't need the
latest Autogen to build NTP).

Cheers,
Dave Hart
A.K.A. h...@ntp.org

[1]  
http://ntp.bkbits.net:8080/ntp-stable/ntpdc/Makefile.am?PAGE=anno&REV=4b3ae9b3ljwKXdhRVvoPxS6G_K2K5Q
or http://tinyurl.com/yhsd2rt




Re: ifdef expessions in Makefile.am

2009-12-17 Thread Dave Hart
On Thu, Dec 17, 2009 at 22:29 UTC, Joakim Tjernlund wrote:
> AM_CONDITIONAL seems to be an automake 1.11 feature

You're running up against something else.  AM_CONDITIONAL goes back
some time, and has worked splendidly for the NTP reference
implementation built using Automake 1.10.

Cheers,
Dave Hart