Re: Addressing sources dynamically generated by autoconf

2022-11-21 Thread Thomas Jahns
On Nov 21, 2022, at 18:37 , Russ Allbery  wrote:
> Thomas Jahns  writes:
> 
>> I know I can write a Makefile.in myself, but given the number of
>> additional targets users expect, I'd really prefer sticking to build
>> instructions as much as possible and delegate dist, check and other
> targets to automake. I also know about AC_LIBOBJ, but this facility
> only seems to accept a list of sources provided verbatim and not via
> some macro or shell variable, but I'd like to be wrong about that.
> 
> AC_LIBOBJ feels like the right mechanism to me, but perhaps I don't
> understand what you're trying to do.  It does seem to do exactly what you
> are talking about, though: substitute in additional sources based on
> Autoconf results without having to list them explicitly in Makefile.am.
> 
> I think the problem you may be running into is that AC_LIBOBJ requires
> that the file name you give it as an argument be a source file that is
> included in the distribution tarball, whereas you want to generate that
> source file on the fly.  But you can work around that with a bit of
> trickery.  Suppose that you have some probe that conditionally calls
> AC_LIBOBJ([mpich-fix]) based on whether you detect that there's a problem.
> You can then include mpich-fix.c in your distribution tarball as a
> one-line file that contains only:
> 
> #include "mpich-fix-real.c"
> 
> mpich-fix-real.c is *not* included in your distribution tarball.  If the
> configure check succeeds, this file is never added to the sources and thus
> is never compiled and the fact the file doesn't exist doesn't matter.  If
> the configure check fails and you need to fix something, then configure
> generates the file mpich-fix-real.c on the fly, Automake adds mpich-fix.o
> to LIBOBJS/LTLIBOBJS, and the compiler follows the #include and builds
> your generated code.
> 
> Does that get at the problem that you're having?


Thanks, I hadn't thought of doing that redirection via #include, it might just 
work. I will need to add some boilerplate files, so I'm open to further 
suggestions if that can be made unnecessary. But otherwise, going for #include 
instead of copying appears workable.
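
Sketched as a configure.ac fragment, the trick Russ describes might look like this (the cache variable and the generator command are hypothetical; mpich-fix.c is the distributed one-line shim):

```m4
# Distributed shim mpich-fix.c contains only: #include "mpich-fix-real.c"
AS_IF([test "$acx_mpich_sendrecv_bug" = yes],
  [AC_LIBOBJ([mpich-fix])
   # generate the non-distributed real source on the fly during configure
   acx_generate_mpich_fix > mpich-fix-real.c])
```

If the bug is absent, neither the object nor mpich-fix-real.c ever comes into play.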

Regards, Thomas




smime.p7s
Description: S/MIME cryptographic signature


Re: Addressing sources dynamically generated by autoconf

2022-11-21 Thread Thomas Jahns

> On Nov 21, 2022, at 17:10 , Jan Engelhardt  wrote:
> On Monday 2022-11-21 16:22, Thomas Jahns wrote:
> 
>> The question consequently is: how would I create a Makefile.am that accounts
>> for a list of C sources, when the sources are not yet present/known from the
>> perspective of automake?
> 
> I don't see that working even without automake. Once make has loaded a
> Makefile, the internal DAG is immutable for all practical considerations. If
> you do not have the source names, what would you give the compiler? Which
> compiler would you even invoke if you do not know whether you are going to
> have a C or C++ file?

This is probably a misunderstanding: I meant that I would prefer not to list the 
source files in Makefile.am, to avoid duplicating the list already needed in 
configure. Also, I wrote that all source files are expected to be C (MPI 
internals are, to my knowledge, currently always written in C; that might 
change with more Rust out there someday).

> That is why source file names ought to be known, even if the file itself is
> empty, absent-until-later, or remains completely unused due to conditionals
> along the way.

It's a bummer if I cannot forgo naming the files just to inform automake that I 
need the C compiler/linker here, but that might be so.

>> While installing a fixed MPI library might seem to be the correct way to
>> handle the issue, our users are not typically in a position to demand this
> 
> root is not needed. They can install the fixed MPI library to
> /home/self, use LD_LIBRARY_PATH at runtime and
> 
> ./configure CPPFLAGS=-I/home/self/include LDFLAGS="-L/home/self/lib
> -Wl,-rpath,/home/self/lib"
> 
> for build-time. The upside is that neither libyaxt nor libzz need to bother
> producing a fixed MPI on their own, individually, which slims down all these
> projects.

If you spoke to typical users of HPC software, you would find that building MPI 
and associated libraries on their own is a very tedious prospect for them. 
These are typically researchers interested in some simulation outcome and, as 
stated, they are usually operating with little time to spare. Also, without 
in-depth knowledge it can be very difficult to recreate an MPI installation 
that works just like what the HPC site or vendor installed.

Regards, Thomas





Addressing sources dynamically generated by autoconf

2022-11-21 Thread Thomas Jahns
Hello,

I'm maintainer of a project that sometimes needs to paper over defects in the 
underlying MPI library by interposing patched portions of the MPI library. 
The latest instance is one where MPICH mixed up send and receive arguments on a 
specific function and multiple files from MPICH need to be downloaded, patched, 
compiled and linked first into a test program during configure and later into 
our library. We currently use autoconf/automake/libtool and are very happy with 
what that delivers for building our own sources.

The downloaded sources for a defective MPI library on the other hand have a 
different license from our project and therefore cannot be distributed by us.

While installing a fixed MPI library might seem to be the correct way to handle 
the issue, our users are not typically in a position to demand this and expect 
a timely, affirmative response, or their computing time grant might even run 
out before the issue is fixed correctly. Hence, our approach is messy and 
wouldn't be needed in a better world, but so far it's the best approach we've 
found to keep going and not lose too much valuable time.

As is probably obvious from the above, the need to compile such external 
sources is not well covered by automake, which typically expects to know about 
all potential source files ahead of generating the Makefile.
But since we need the build information already during configure, to test for 
each defect and whether we can "cure" it, I'd much prefer to put everything 
into our autoconf macros and only substitute the needed sources into 
Makefile.in later.

The question consequently is: how would I create a Makefile.am that accounts 
for a list of C sources, when the sources are not yet present/known from the 
perspective of automake? I assume the problems mostly stem from the way automake 
derives which rules/variables to include in Makefile.in and how to handle 
dependencies.

For the patches we currently have, I use one block for MPICH and another for 
Open MPI. This worked reasonably well only for a single .c file in each case. 
With the newly discovered problem arises the need to handle multiple .c files, 
and also an implementation may have more than one defect that we need to treat 
for a successful run of the test suite.

In summary, I'd really like to keep as much of the complexity in the autoconf 
macros and only substitute a set of generic SOURCES and FLAGS or similar 
variables in a single Makefile.in, preferably generated by automake. The 
currently used Makefile.am can be found here:

https://gitlab.dkrz.de/dkrz-sw/yaxt/-/blob/v0.9.3.1/src/Makefile.am#L201

I know I can write a Makefile.in myself, but given the number of additional 
targets users expect, I'd really prefer sticking to build instructions as much 
as possible and delegate dist, check and other targets to automake. I also know 
about AC_LIBOBJ, but this facility only seems to accept a list of sources 
provided verbatim and not via some macro or shell variable, but I'd like to be 
wrong about that.
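
For reference, the verbatim requirement might be sketched like this; automake discovers the replacement objects by scanning configure.ac for literal AC_LIBOBJ/AC_LIBSOURCES calls, so a shell variable holding the list would not be seen (file and variable names here are hypothetical):

```m4
AC_LIBSOURCES([mpich_fix1.c, mpich_fix2.c])dnl names must appear verbatim
AS_IF([test "$acx_have_mpich_bug" = yes],
  [AC_LIBOBJ([mpich_fix1])
   AC_LIBOBJ([mpich_fix2])])
```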

Since I feel we are at a crossroads between continuing our current ad-hoc 
method and becoming a bit more systematic, any opinions or ideas on possible 
approaches are highly welcome.

Kind regards, Thomas Jahns



Re: How to speed up 'automake'

2022-05-02 Thread Thomas Jahns

On May 2, 2022, at 15:07 , Jan Engelhardt  wrote:
> 
> 
> On Monday 2022-05-02 14:20, Thomas Jahns wrote:
>>>>> Is there a way to speed 'automake' up?
>>>> 
>>>> [...let] ephemeral builds [..] use /dev/shm [...]
>>> 
>>> There ought to be little difference [...] automake, that's nowhere near as
>>> IO-heavy as untarring kernel source archives. It's much more a CPU-bound
>>> task.
>> 
>> I found tmpfs to be faster when there were multiple smallish (less than an fs
>> block) writes to the same file, particularly by different programs. This may
>> be more important in the context of all autotools taken together than automake
>> alone. Also not all file systems take full advantage of all methods to prevent
>> the system from hitting disk like XFS does, i.e. results depend on what you
>> compare to.
> 
> But you're just acknowledging that the slowness comes from the _fs_, are you not?

Yes, sure, I explicitly stated in my initial reply that using tmpfs might not 
be what the OP asked for, but instead what might actually solve their problem 
of slow builds. I also like having programs make efficient use of 
system resources, but sometimes throwing machine resources at a problem can be 
the most appropriate course of action.

> Indeed, if a source code package consists of 10,000 files, then configure
> produces another 10k files for the stuff in the ".deps" directories.
> There is not much autotooling can do here, as I believe pregenerating
> those 10k files all with "# dummy" content is to support the least common
> denominator of /usr/bin/make.
> 
> I wonder, rather than emitting those 8 bytes to .Po/.Plo/.Tpo/etc. files,
> could we emit 0 bytes instead? Then filesystems would have to write only the
> inode and forgo extent allocation for the data portion (and save some disk
> space too, as each 8-byte file in practice reserves something like 4K on
> non-packing/non-compressing filesystems).
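
The size difference discussed above is easy to check with a quick sketch:

```shell
# Recreate the two stub styles in a scratch directory: automake's "# dummy"
# stub occupies a data extent, while an empty file needs only the inode.
depdir=$(mktemp -d)
: > "$depdir/empty.Po"                 # zero-byte file
printf '# dummy\n' > "$depdir/foo.Po"  # the classic 8-byte automake stub
wc -c < "$depdir/empty.Po"   # 0
wc -c < "$depdir/foo.Po"     # 8
```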

That might be something to investigate: could autoconf detect GNU parallel and 
call config.status in some way that automatically creates multiple instances? 
Running config.status is usually not the longest part of an autoconf run but 
one that might be reasonably simple to parallelize.

Thomas






Re: How to speed up 'automake'

2022-05-02 Thread Thomas Jahns
> On Apr 30, 2022, at 01:31 , Jan Engelhardt  wrote:
> 
> On Friday 2022-04-29 22:59, Thomas Jahns wrote:
>> On 4/27/22 3:49 PM, R. Diez wrote:
>>> Is there a way to speed 'automake' up?
>> 
>> While you are probably looking for system-independent advice, the best 
>> results
>> I've had with speeding up ephemeral builds is to simply use /dev/shm for
>> backing storage on Linux, i.e. first try to put build directories there
>> ($XDG_RUNTIME_DIR is also fine on modern Linux). If the installation is not
>> needed later on, you can also put the installation path there.
> 
> There ought to be little difference, both use the page cache, except
> that using tmpfs carries the usual volatility risks (not backed by a
> real device, susceptible to power loss, etc., blocks other serious
> processes from using resources, and tmpfs objects may get moved to
> swapspace, which isn't great at all considering you get to pick up
> pieces from the swap partition in the event of a power loss.)
> 
> tmpfs may be interesting from a psychological point of view and/or
> when there are a *lot* of files. But automake, that's nowhere near as
> IO-heavy as untarring kernel source archives. It's much more
> a CPU-bound task.

Very much depends on what the programs do: I found tmpfs to be faster when 
there were multiple smallish (less than an fs block) writes to the same file, 
particularly by different programs. This may be more important in the context 
of all autotools taken together than automake alone. Also not all file systems 
take full advantage of all methods to prevent the system from hitting disk like 
XFS does, i.e. results depend on what you compare to.

Thomas







Re: How to speed up 'automake'

2022-04-29 Thread Thomas Jahns

On 4/27/22 3:49 PM, R. Diez wrote:

Is there a way to speed 'automake' up?


While you are probably looking for system-independent advice, the best 
results I've had with speeding up ephemeral builds is to simply use 
/dev/shm for backing storage on Linux, i.e. first try to put build 
directories there ($XDG_RUNTIME_DIR is also fine on modern Linux). If 
the installation is not needed later on, you can also put the 
installation path there.


This assumes your development system has a few GB of RAM to spare, but 
nowadays RAM is cheap.
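
A minimal sketch of that setup (assumes Linux; the configure step is left as a comment since the source tree location is hypothetical):

```shell
# Put an ephemeral build tree on RAM-backed storage; fall back to the normal
# temporary directory when /dev/shm is not available (non-Linux systems).
builddir=$(mktemp -d /dev/shm/build.XXXXXX 2>/dev/null || mktemp -d)
echo "building in $builddir"
cd "$builddir"
# "$srcdir/configure" --prefix="$builddir/inst" && make -j   # hypothetical
```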


Thomas





Re: Future plans for Autotools

2021-05-03 Thread Thomas Jahns

Hello,

some comments with a similar HPC background:

On 2021-05-02 19:49, FOURNIER Yvan wrote:
[...]

- it is very easy to find the configure options from a previous run at the top 
of the config.log and config.status files, and then copy/paste them so as to 
generate a new build in a clean (empty) build directory. This is IMHO a strong 
point compared to CMake, whose caching aspects can lead to complex issues, and 
where regenerating a configuration command is not so easy.


This is indeed valuable, but could be improved by adding the appropriate quotes.

[...]

- autoconf can generate multiple configuration files (for example public/private 
header files), but automake assumes only one is used. I don't remember the 
details, but I had to work around this 15 years ago to re-implement the 
generation of a "light" external config.h in m4 and shell instead of using 
built-in features due to this.


We also found that we could not install config.h, and we use a Perl script to 
insert config.h macros into the installed headers.



- Our code is a mix of Fortran and C, with a bit of C++. Automake still does 
not support Fortran 90-type module dependencies, so we have to manage manual 
dependencies in one of our Makefile.am's. More modern systems handle Fortran 
(not quite the latest fad) much better.


At our site, there are a number of scripts in use that generate the Fortran 
module/file dependencies on demand. For those not knowing Fortran: source files 
are compiled to both .mod files, for use in later runs of the compiler, and .o 
files for the link step. The .mod files can follow a number of conventions with 
respect to the suffix and the upper/lower case of the basename and suffix.
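
The core extraction step of such a script can be quite small; a hedged sketch (real generators also handle comments, INTRINSIC modules, renamed-only imports, and the compiler-specific .mod naming conventions mentioned above):

```shell
# Very simplified: list the modules a Fortran source file uses, one per line.
list_used_modules() {
  sed -n 's/^[[:space:]]*[Uu][Ss][Ee][[:space:]]\{1,\}\([A-Za-z_][A-Za-z0-9_]*\).*/\1/p' "$1"
}
```

From the printed names, per-file make dependencies on the corresponding .mod files can then be emitted.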


[...]

- .la files broken by Linux packaging: in the case of an MPI library built with 
various dependencies, some of which are loaded dynamically as plugins, and not 
installed by default with the minimal packages, the .la file lists some 
non-installed (and non-needed) libraries as dependencies.


I found that in general it's advisable to look through .la files and edit them 
if needed.



- incorrect options: recently, trying to use the Cray compiler toolchain on an Arm64 
machine, some low-level dependency related to back-end gcc libraries added a 
"-pthread" option in the link line, while the top-level compiler used a 
different syntax and did not accept that.


We encounter the same problem with the NAG Fortran compiler (equivalent options 
are -Wc,-pthread -Wl,-pthread). The simplest solution was a wrapper around the 
compiler to patch up the options; another colleague patched ltmain.sh to adjust 
appropriately.
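
Such a wrapper can be a few lines of shell; a hedged sketch of the rewriting logic (the nagfor invocation and the translated flag set are assumptions, not the actual wrapper in use):

```shell
# Hypothetical wrapper logic for the NAG Fortran compiler: rewrite -pthread
# into nagfor's -Wc,/-Wl, pass-through syntax before invoking the compiler.
# REAL_FC defaults to nagfor; set REAL_FC=echo to dry-run the translation.
mangle_and_run() {
  for arg in "$@"; do
    shift
    case $arg in
      -pthread) set -- "$@" -Wc,-pthread -Wl,-pthread ;;
      *)        set -- "$@" "$arg" ;;
    esac
  done
  "${REAL_FC:-nagfor}" "$@"
}
```

Installed as an executable script and pointed to via FC, this keeps configure and libtool unaware of the translation.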



Although I have quite a bit of experience working around libtool's quirks, I 
was simply not able to build my code using that toolchain on that machine. 
Total failure just before the finish line...


- I have had a similar issue generating shared libraries with CUDA, though in 
that case I could work around this by duplicating and patching the generated 
libtool file, and adding a Makefile.am rule for .cu files. But I have never 
seen another project optionally using CUDA which bothered doing this rather 
than using another build system (i.e. if there is a better solution, I did not 
find it).


Since CUDA depends on the installed kernel module, it also gives us headaches 
regarding the stability of installed libraries like MPI. There is support for 
optional .cu files in our highest-level software; I'll have to look up what the 
responsible colleague did that might have eluded you. Since we already depend 
on a build environment with the PGI/NVIDIA compiler for OpenACC when GPU 
support is needed, and that adds everything for CUDA below the layer visible to 
libtool, we might not be seeing your specific issue.


[...]

Libtool has many nice aspects, such as handling of versioning, rpaths, and 
more, but its insistence that it is always right, even in situations where it 
produces unusable commands, can really be a showstopper.


I can second that, we now use 7 patch sets for ltmain.sh to address issues 
regarding compilers not matching the expectations of libtool.



I have not looked at Meson much so far. In the short term, the fact that it 
requires ninja, which is not yet ubiquitous on HPC systems, makes things a bit 
less comfortable in our case, but the fact that is uses Python is an advantage 
(Python is so prevalent in scientific software that just about all users and 
developers in our community need to learn at least some rudiments anyways, 
which is not true of m4, Makefiles, or cmake). So it could become a serious 
contender (whereas the other Python-based contenders such as scons and Waf seem 
to have less momentum).


I found scons to be sufficiently complex to be impenetrable to a number of 
scientists, who otherwise knew and used Python.


Regards, Thomas
--
Thomas Jahns
HPC-Gruppe
Abteilung Anwendungssoftware

Deutsches Klimarechenzentrum GmbH
Bundesstraße 45a 

Re: Constantly changing libtool

2021-04-16 Thread Thomas Jahns

Hi Laurence,

From what I can see, you are trying to solve an issue outside the scope of 
autotools, which are AFAIK used successfully in two contexts:


1. users are provided a combination of source program and autotools 
generated files, and, unless they want to change the build system 
itself, have no need to worry about the autotools underpinnings but just 
run configure and make.


2. users receive source without autotools artifacts and are expected to 
run autotools at least once successfully.


Scenario 1. is obviously not helpful if people are expected to also 
change the build system to e.g. include new files. In scenario 2. more 
work than absolutely needed might be expected of users.


It seems your approach is somewhere in between: you supply 
autotools-generated files but expect users to also run autotools. I 
think you can improve outcomes by setting minimum version requirements 
in configure.ac so the consistency of the dev environment improves but 
that would probably not take care of all issues with e.g. new gcc versions.


My suggestion would be to supply more than just the sources: consider 
handing out virtual machine images or docker containers. Doing so would 
enable you to keep much more under control than only the sources. Users 
would receive a complete environment that's almost guaranteed to produce 
identical results to what you prepared for lectures/hands-on sessions.


Thomas Jahns


On 4/15/21 4:41 PM, Laurence Marks wrote:

As you say, it has files generated by autotools in a cvs (anonymous), so
when checking out there is a clash.

---
The software is at http://www.numis.northwestern.edu/Software/, mainly edm
& semper-7.0beta. They are both old, not so great but I still use them
every year or so for teaching, and there are a few others who use them.
They are updated via cvs (yes, they are that old). I am updating them
currently (for a class) so they may be changing.

At least for edm most of the problems are with fftw-2.1.5 (
http://www.fftw.org/download.html) which is also somewhat old. (Updating
the code to use more recent fftw would be a pain.)

On Thu, Apr 15, 2021 at 9:27 AM Peter Johansson  wrote:


Hi Laurence,

On 15/4/21 6:35 am, Bob Friesenhahn wrote:

Most problems seem to stem from packages providing the generated files
from Autoconf, Automake, and libtool so that end users don't need to
have the tools available.


It will be easier for people here to help if you provide a bit more
information about your set-up. Is the problem when building from a
tarball or when building from git/subversion? If the latter, do have you
checked in files generated by autotools into git/subversion, so when
checking out/cloning the source to a system with other autotools
versions, there becomes a clash between versions?

Cheers,

Peter









--
Thomas Jahns
HPC-Gruppe
Abteilung Anwendungssoftware

Deutsches Klimarechenzentrum GmbH
Bundesstraße 45a • D-20146 Hamburg • Germany

Phone:  +49 40 460094-151
Fax:+49 40 460094-270
Email:  Thomas Jahns 
URL:www.dkrz.de

Geschäftsführer: Prof. Dr. Thomas Ludwig
Sitz der Gesellschaft: Hamburg
Amtsgericht Hamburg HRB 39784





Re: pkg-conf and LD* variables

2018-10-28 Thread Thomas Jahns

Hi,

On 10/28/18 1:42 AM, Harlan Stenn wrote:

pkg-conf offers the following ways to get "link flags":

  --libsAll link flags


use the above and put it in the LIBS variable. No need to make this more 
complicated.



  --libs-only-L  The -L/-R stuff
  --libs-only-l The -l stuff
  --static  Output libraries suitable for static linking

and then says the union of -only-L and -only-l may be smaller than the
output of -libs because the latter might include flags like -rdynamic.

We're already having to parse the output of pkg-config to try and
separate LDADD stuff from LDFLAGS stuff. And this is before we start to
address static v. dynamic things.

This is because we really don't want -L stuff in LDADD, because we end
up with things like:

  No rule to make: -L /usr/local/lib

(or something).


Which is why the LIBS variable exists, where you can put all this and be 
done with it.
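
In configure.ac terms, that might look like this (the module name `foo` is a placeholder):

```m4
PKG_CHECK_MODULES([FOO], [foo >= 1.0])
# PKG_CHECK_MODULES AC_SUBSTs FOO_CFLAGS and FOO_LIBS; no manual splitting
LIBS="$FOO_LIBS $LIBS"
CPPFLAGS="$FOO_CFLAGS $CPPFLAGS"
```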


Regards, Thomas



Re: compile .c files as C++?

2018-01-25 Thread Thomas Jahns

On 01/25/18 04:07, Jay K wrote:

I have a bunch of C.
I want to move to C++.
I'm using automake.
I don't want to rename any files.


That's unwise and not going to serve you well in the long run. I very much 
advise against keeping the old names for files converted from C to C++. While 
it can be done, it is very likely to lead to a fragile setup that will 
encounter lots of bugs in edge cases. Don't do this; rename your converted 
files instead.


If you absolutely have to, consider creating dummy C++ files that #include the 
corresponding .c file.


C and C++ are sufficiently different languages today that having your tools make 
the correct assumptions is crucial. Don't trust random 
debuggers/profilers/IDEs/whatever to figure this out.


Regards, Thomas





Re: Mapfile missing from shared object artifact?

2017-11-09 Thread Thomas Jahns

On 11/09/17 05:10, Jeffrey Walton wrote:

On Wed, Nov 8, 2017 at 3:53 PM, Jeffrey Walton  wrote:

On Tue, Nov 7, 2017 at 12:33 PM, Jeffrey Walton  wrote:

I'm trying to run 'make check' on Solaris. It results in:

$ ./cryptestcwd v
ld.so.1: cryptestcwd: fatal:
/export/home/cryptopp/.libs/libcryptopp.so.6: hardware capability
(CA_SUNW_HW_1) unsupported: 0x480  [ AES SSE4.1 ]
Killed


Any thoughts on this issue?


I gave up trying to have Automake use the flag. When I finally got it
applied using 'sed' on 'libtool', libtool choked on the option.

We ended up hijacking libtool's postlink_cmds (which was empty), and
inserting a script that used elfedit to insert the capabilities we
wanted:

 elfedit -e 'cap:hw1 0x1800' .libs/libcryptopp.so.6.0.0

Libtool probably should have warned it was discarding an important
f**k'ing option instead of trying to sneak it by. Silent failures
waste everyone's time and make my blood boil.

Also see https://github.com/noloader/cryptopp-autotools/commit/61828068c6ab.


Our[1] solution to the problem of libtool swallowing options is to escape 
known problematic options with -Xlinker or -Xcompiler prior to emitting the 
Makefiles. And since we run configure tests with libtool already active, that 
happens relatively early. We have more problems with Fortran compilers, so 
that's reflected in the corresponding macro _ACX_LT_FORT_FLAGS_MANGLE in 
m4/acx_use_libtool_configuration.m4, but the idea applies equally to C/C++ 
compiler options.

Regards, Thomas

[1] https://www.dkrz.de/redmine/projects/scales-ppm





Re: Automake Digest, Vol 175, Issue 3

2017-09-07 Thread Thomas Jahns

Hello,

On 09/06/17 00:57, Nick Bowler wrote:

On 2017-09-05, Kip Warner  wrote:
[...]

Hey Thomas. Good question. It could well be that no hackery at all is
required with this. Here is my Makefile.am:

https://github.com/cartesiantheatre/narayan-designer/blob/master/Source/Makefile.am

See parser_clobbered_source_full_paths as an example. This variant
containing the full path is used in BUILT_SOURCES, nodist_..._SOURCES,
CLEANFILES, and as a target.

The parser_clobbered_source_files_only variant containing the file
names only is used on line 150 as a workaround for where bisonc++(1)
emits its files.

If you can see a more elegant way of solving the same problem I'm
trying to, I'm all ears.


If your only uses of the directoryless-filenames are in rules, then
just write the names including directories in the make variables,
then strip off the directory components inside the rule.  In rules
you can use the much more powerful shell constructs.

Example:

   % cat >Makefile <<'EOF'
   FOO = a b/c d/e/f

   my_rule:
for i in $(FOO); do \
  case $$i in */*) i=`expr "$$i" : '.*/\(.*\)'`; esac; \
  printf '%s\n' "$$i"; \
done
EOF
   % make my_rule
   a
   c
   f


Really the only place where names need to be included verbatim is in the 
so-called automake primaries. Those need the names for the make dist rules. In 
other words, files that are not distributed don't need to be spelled out; the 
name can be computed instead. Whether to add or remove the directory part is 
then more a question of whether the files need to go into the distribution 
tar.xz with or without it.



If you assume a reasonably-POSIXish shell, you can use something like
$${i##*/} to strip directory parts instead (I think this form will fail
on at least Solaris /bin/sh).


That's an argument in favor of adding the directory part since no comparable 
portability headache applies to


"dir/$$i"

Regards, Thomas





Re: Portable $addprefix

2017-09-04 Thread Thomas Jahns

On 08/25/17 04:02, Kip Warner wrote:

I'd like to transform the following variable in my Makefile.am from...

 files_only = a.foo b.foo c.foo d.foo ...

Into...

 files_with_path = dir/a.foo dir/b.foo dir/c.foo dir/d.foo ...


Can you give more context on why you need to substitute on the left-hand side 
here? It's after all simply a Makefile variable, so I don't exactly see what 
the purpose is.


Regards, Thomas





Re: Creating a link with automake

2017-01-23 Thread Thomas Jahns

On 01/20/2017 11:21 AM, Bernhard Seckinger wrote:

I've got a program, that contains some php-script frontend (cli not web)
(and other stuff, which is irrelevant here). I've put the php-scripts into
$pkgdatadir. Now I'd like to have a link from $bindir to the main script i.e.

ln -s ${pkgdatadir}/croco.php ${bindir}/croco

To do this I've added the following to the Makefile.am:

install-exec-local:
mkdir -p ${bindir}
ln -s ${pkgdatadir}/croco.php ${bindir}/croco

When using "make install" this works. But when I run "make distcheck" I get an
error telling me that I'm not allowed to create ${bindir}. I've already
tried to replace the mkdir command with

${srcdir}/../install-sh -d ${bindir}

which is probably architecture-independent, but I still get a similar error.

Does anyone know, how to do this?


I think the SCRIPTS primary is what you're searching for:

https://www.gnu.org/software/automake/manual/automake.html#Scripts
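
If the symlink approach is kept, the distcheck error most likely comes from the missing $(DESTDIR) prefix, since distcheck performs a staged install; a hedged Makefile.am sketch:

```make
# distcheck installs into a staging tree, so every installation path in a
# hook must be prefixed with $(DESTDIR); distcheck also verifies uninstall.
install-exec-hook:
	$(MKDIR_P) "$(DESTDIR)$(bindir)"
	ln -sf "$(pkgdatadir)/croco.php" "$(DESTDIR)$(bindir)/croco"

uninstall-hook:
	rm -f "$(DESTDIR)$(bindir)/croco"
```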

Regards, Thomas






Using libtool even when not strictly needed

2016-09-26 Thread Thomas Jahns

Hello,

I've run into the problem where a compiler we regularly use (NAG Fortran 
compiler) has flag conventions that are at odds with libtool (i.e. nagfor uses 
-Wc,opt to pass opt to the C compiler it uses as backend). For this reason I'd 
like to add the compiler options to FCFLAGS in a form that's escaped for the 
purposes of libtool, i.e. prefixed with -Xcompiler or -Xlinker.


This works well but at the moment requires me to override FCCOMPILE like this in 
Makefile.am (note the -static flag):


FCCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=FC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=compile $(FC) -static $(AM_FCFLAGS) $(FCFLAGS)

This way all compilations are passed through libtool and options are subjected 
consistently to the same parsing sequence. I'm not sure how this would affect 
people who want to build pie executables, but that's an issue for later.


Now the question remains: since FCCOMPILE is really something that automake 
ought to set up properly, corresponding to the automake version, how do I 
achieve the same in a more robust fashion?


Regards, Thomas





Re: Turn off C compiler warnings in automake

2015-06-30 Thread Thomas Jahns

On 06/29/15 17:31, Alex Vong wrote:

Thanks for telling me there is no portable flag for doing so.
I am now using AC_SUBST() to set the value of STREAM and append `
$(STREAM)>/dev/null' to every make command. If the user configure with
--enable-verbose-compiler, then STREAM will be set to 0, otherwise
STREAM will be set to 2. This keeps the flexibility. Does it sound
reasonable?


I'm not sure if an output redirection on stdin is portable. But I think given 
the way you are going about this you could easily set a Makefile variable to the 
full redirection (or none), i.e. lose the >/dev/null and instead make STREAM be 
either '2>/dev/null' or the empty string ''?
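
As a Makefile.in sketch (variable name and substitution assumed): because make expands $(STREAM) textually before the shell parses the recipe, the redirection takes effect without any eval trickery.

```make
# configure AC_SUBSTs STREAM to '2>/dev/null' or to the empty string
STREAM = @STREAM@

.c.o:
	$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $< $(STREAM)
```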


Regards, Thomas
--
Thomas Jahns
HD(CP)^2
Abteilung Anwendungssoftware

Deutsches Klimarechenzentrum GmbH
Bundesstraße 45a • D-20146 Hamburg • Germany

Phone:  +49 40 460094-151
Fax:    +49 40 460094-270
Email:  Thomas Jahns 
URL:www.dkrz.de

Geschäftsführer: Prof. Dr. Thomas Ludwig
Sitz der Gesellschaft: Hamburg
Amtsgericht Hamburg HRB 39784





Re: Turn off C compiler warnings in automake

2015-06-29 Thread Thomas Jahns

Hi Alex,

On 06/28/15 16:21, Alex Vong wrote:

Besides, the code base is quite old and as we know compilers always
add new warnings. I have asked upstream about fixing the warnings, but
it seems there is no easy way to fix all of them. So I want to know: is there
a portable way to silence all compiler warnings? Since there are lots of
warnings even without '-Wall -Wextra'. I want to know what you think about it.


since you are using gcc, there are -fsyntax-only and -w which should provide 
less verbose builds. But there are only very few portable compiler options (like 
-I, -D) and to my knowledge none address warnings.


You could of course redirect all compiler stderr output to /dev/null and thus 
get rid of it, i.e. add 2>/dev/null to your make calls. But this will make 
debugging build failures harder later on.


Regards, Thomas






Re: Turn off C compiler warnings in automake

2015-06-22 Thread Thomas Jahns

Hello,

On 06/22/15 15:44, Alex Vong wrote:

Is there any easy way to turn off C compiler warnings (those printed
to stderr) portably?


From my point of view, the easy way is to write portable code that does not
generate warnings. This is also the preferred and recommended way.


But automake is quite agnostic in this regard: no part of it (at least none that
I know of) makes the compiler generate more warnings than the code produces
anyway with the selected compiler warning flags.


I think you are trying to handle within automake something that is quite
external to it, which can only lead to problems later on.


Please clarify under what circumstances you get unwelcome warnings. If a user of
some package builds it with e.g. gcc -Wall, that user is, in my opinion,
entitled to all the warnings this produces.


Regards, Thomas





Re: How to use ld options correctly for --whole-archive in automake and libtool

2015-03-06 Thread Thomas Jahns

On 03/05/15 18:28, Andy Falanga (afalanga) wrote:

-Original Message-
From: Thomas Jahns [mailto:ja...@dkrz.de]
Sent: Wednesday, March 04, 2015 5:45 PM
To: Andy Falanga (afalanga)
Cc: automake@gnu.org
Subject: Re: How to use ld options correctly for --whole-archive in
automake and libtool


This is, from my point of view, not so much an automake issue as a
libtool problem: libtool, for reasons unknown to me, decided to make
-Wl, the prefix for options that should be passed through to the
linker, even though it is also the prefix gcc and various other
compilers (which serve as link-editor frontends) use for linker
options, and libtool later hands these on to the link command
(CC/CXX/whatever). These "options" are then re-ordered with respect to
non-options (like your .a files), which makes it difficult to pass them in the correct


For this very reason, I decided to subscribe to libtool's mailing list
as well and ask the same question there.  As yet, however, I've received no
responses.  Since the tools work together, it still made sense to ask
here as well.


position. You might be able to work around this with judicious use of
extra -Wl, prefixes like this:

sata_la_LIBADD = -Wl,-Wl,,--whole-archive,../Shared/HwMgmt/.libs/libhwmgmt.a,../Shared/Misc/.libs/libmisc.a,.libs/libsata.a,-Wl,,--no-whole-archive -lz -lrt



A very interesting suggestion.  I shall have to try this.  I found
something similar after the many, many searches I've done with
Google.  It didn't have this many uses of "-Wl,", though, so that is
quite interesting.


Also, it should be noted that you would need to make the files in libhwmgmt.a
PIC objects instead of the non-PIC objects libtool creates for static archives
by default.



This is a good question.  I have been asking myself the same thing.
Using these tools opens up a newer deployment method than we've used
to this point.  I do still have to answer a question of how I shall
statically link with other libraries, most notably Boost.  The systems
we deploy to will either not have these libraries altogether or they
have such woefully out of date versions that statically linking with
these other libraries is the only option.  How would I ensure I statically
link with these libraries using the automake process?


That's not something you want to fix from within automake; rather, specify
something like LIBS="-Bstatic -lboost -Bdynamic" at configure time, since it
addresses a specific issue of the build you intend to do, not your package
itself.
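
As a concrete configure invocation this might look like the following (a
sketch, not tested against this package: the actual Boost library names vary,
and with the GNU toolchain the flags usually need a -Wl, prefix so the compiler
driver forwards them to the linker):

./configure LIBS='-Wl,-Bstatic -lboost_python -Wl,-Bdynamic'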


But you can still link dynamically against different library versions than
those installed on the system by making use of -Wl,-rpath on Linux. I haven't
gone through the process for plugins myself, so I might be missing something
that forces inclusion of the objects.


Regards, Thomas






Re: How to use ld options correctly for --whole-archive in automake and libtool

2015-03-06 Thread Thomas Jahns

Hello Vincent,

On 03/05/15 18:43, Vincent Torri wrote:

I would also use $(top_builddir) instead of relative path. It could
help when building in another directory than the source one (like with
make distcheck)


I cannot see how .. would ever refer to the source tree, and since libhwmgmt.la
is guaranteed to be created in the build tree, this doesn't help in any way,
IMHO. The abs_* variants of the various directory variables are indeed helpful
in many situations for the build tree (i.e. when changing directory in a rule).
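
For instance (a sketch; the rule and the ./runner test program are made up for
illustration), an absolute variable keeps working after a cd in a recipe where
a relative .. would not:

# Makefile.am fragment (sketch)
check-local:
	cd tests && $(abs_top_builddir)/libtool --mode=execute ./runner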


Thomas





Re: How to use ld options correctly for --whole-archive in automake and libtool

2015-03-04 Thread Thomas Jahns

Hello Andy,

On Mar 4, 2015, at 20:05 , Andy Falanga (afalanga) wrote:
The team I work with has a project which is a C++ shared library
exposed in two different manners.  The first is a traditional C++
library; the second exposes it, through Boost.Python, as a module.
I found some great references for how to make this happen.  However,
I've run into a problem: wrapping, in their entirety, the libraries
used in making the python module.  In brief, the code is laid out
thusly:


Main  (has a Makefile.am and configure.ac files)
 |
 |-Shared
 ||
 ||- HwMgmt  (Makefile.am, makes libhwmgmt.a and libhwmgmt.so)
 ||- Misc(Makefile.am, likewise for libmisc)
 |
 |-sata  (Makefile.am, produces libsatacpp.so and sata.so)
 |   |
 |   |-satacpp   (sources for the C++ API for SATA lib)
 |   |
 |   |-sata  (sources wrapping satacpp in boost.python)


Since we distribute this with both a C++ and a python API, I decided
to simply use the autotools as they are intended to distribute the
whole thing.  However, there is a problem.  When I build the whole
thing from "Main", the libraries which are built for sata.so (the
python module) aren't yet built.  So, they've got to be included, or
linked, when we build sata.so.


These aren't simply convenience libraries, so, for example, using
"noinst_LTLIBRARIES = libhwmgmt.la" isn't acceptable.  However, the
*.a files made must be entirely included in the python module.  In
our former, home-baked makefile arrangement, we just built them up
and included them as follows:


gcc ... -Wl,--whole-archive /path/to/first.a /path/to/second.a -Wl,--no-whole-archive


I'm trying to reproduce this using the automake tools.  I have this  
in the Makefile.am located in sata:


lib_LTLIBRARIES = satacpp.la
satacpp_la_SOURCES = ...

pyexec_LTLIBRARIES = sata.la
sata_la_LDFLAGS = -module
sata_la_LIBADD = -Wl,--whole-archive ../Shared/HwMgmt/.libs/libhwmgmt.a ../Shared/Misc/.libs/libmisc.a .libs/libsata.a -Wl,--no-whole-archive -lz -lrt


As I'm sure no one here will be surprised, this causes automake to
fail because "linker flags such as '-Wl,--whole-archive' belong in
sata_la_LDFLAGS."  However, when I place them there, I find that
Makefile.in and the final Makefile have things where I expect them,
but when make runs, the libraries I've specified to be included as
"whole-archive" are not listed between the option pair.  Instead, they
are listed earlier and *nothing is listed* between -Wl,--whole-archive
and -Wl,--no-whole-archive.  I assume that libtool is doing this
for me.


I'm using automake version 1.11.1, autoconf 2.63.

Any help is greatly appreciated.



This is, from my point of view, not so much an automake issue as a
libtool problem: libtool, for reasons unknown to me, decided to make
-Wl, the prefix for options that should be passed through to the
linker, even though it is also the prefix gcc and various other
compilers (which serve as link-editor frontends) use for linker
options, and libtool later hands these on to the link command
(CC/CXX/whatever). These "options" are then re-ordered with respect to
non-options (like your .a files), which makes it difficult to pass
them in the correct position. You might be able to work around this
with judicious use of extra -Wl, prefixes like this:


sata_la_LIBADD = -Wl,-Wl,,--whole-archive,../Shared/HwMgmt/.libs/libhwmgmt.a,../Shared/Misc/.libs/libmisc.a,.libs/libsata.a,-Wl,,--no-whole-archive -lz -lrt


But what exactly is the problem with using e.g.
../Shared/HwMgmt/libhwmgmt.la? That sata.so will then require
libhwmgmt.so? libtool should be able to set sata.so's rpath such that
libhwmgmt.so will be found at runtime.
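
The .la-based variant might look like the following (a sketch using the target
names from the thread; -avoid-version is my own addition, as it is commonly
used for loadable modules, and is not taken from the thread):

# sata/Makefile.am (sketch)
lib_LTLIBRARIES = satacpp.la
pyexec_LTLIBRARIES = sata.la
sata_la_LDFLAGS = -module -avoid-version
sata_la_LIBADD = \
    ../Shared/HwMgmt/libhwmgmt.la \
    ../Shared/Misc/libmisc.la \
    satacpp.la -lz -lrt

With this, libtool links the dependencies itself and handles PIC objects and
rpath, instead of relying on --whole-archive ordering.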


Regards, Thomas






Re: `make check` linking against installed copies of the libraries

2014-02-18 Thread Thomas Jahns
Hello,

On 02/17/14 21:58, Simon Newton wrote:
[library in libdir used instead of uninstalled one]
> I've found by removing the trailing -Wl,-rpath -Wl,/usr/local/lib/olad
> the linker will succeed.  However I can't figure out where -rpath
> /usr/local/lib/olad is coming from.

I think this is an issue with your use of libtool, and you might need to ensure
your tests run non-installed binaries prefixed with

$abs_top_builddir/libtool --mode=execute

although the versions of libtool I know generate wrapper scripts which arrange
for that automatically.
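
In an automake test setup this could be wired up as follows (a sketch; the test
name is made up, and LOG_COMPILER requires the parallel test harness, so check
that your automake version provides it):

# Makefile.am (sketch): run each test binary through libtool so the
# uninstalled libraries are picked up instead of installed copies.
TESTS = t_client
LOG_COMPILER = $(top_builddir)/libtool
AM_LOG_FLAGS = --mode=execute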

Regards, Thomas
-- 
Thomas Jahns
DKRZ GmbH, Department: Application software

Deutsches Klimarechenzentrum
Bundesstraße 45a
D-20146 Hamburg

Phone: +49-40-460094-151
Fax: +49-40-460094-270
Email: Thomas Jahns 





Re: question about libtool wrapper and plugins

2014-01-16 Thread Thomas Jahns
Hello Felix,

not an automake issue, but...

On 01/16/14 13:02, Felix Salfelder wrote:
[rpath in uninstalled binary]

The normal mode of using libtool to run the binary with adjusted paths in
circumstances like yours is

./libtool --mode=execute main/program

Regards, Thomas





Re: Can't include SQLite libs in compile

2013-12-18 Thread Thomas Jahns

Hi,

On Dec 18, 2013, at 19:34 , Jordan H. wrote:

Relevant part of configure.ac:

   PKG_CHECK_MODULES(SQLITE, [sqlite3 > $SQLITE_REQUIRED_VERSION])


You probably want to turn SQLITE into SQLITE3 there? Also, I don't
think $SQLITE_REQUIRED_VERSION as a shell variable will work the way
you think, but you could use an m4 define.



   AC_SUBST([SQLITE3_CFLAGS])
   AC_SUBST([SQLITE3_LIBS])



Or, instead of putting SQLITE3 above, use
AC_SUBST([SQLITE3_CFLAGS], ["$SQLITE_CFLAGS"])


Regards, Thomas







Re: Can't include SQLite libs in compile

2013-12-18 Thread Thomas Jahns

Hello,

On Dec 18, 2013, at 22:26 , Jordan H. wrote:

Thanks but I'm still getting the same error.


can you provide the result of the following commands, run after configure, to
get a better idea of what's going wrong?

$ grep SQLITE_ config.log
$ grep program_CFLAGS Makefile


configure.ac:
   ...
   m4_define(SQLITE_REQUIRED_VERSION, 3.0)
   ...
   PKG_CHECK_MODULES(SQLITE, [sqlite >= $SQLITE_REQUIRED_VERSION])
   AC_SUBST([SQLITE_CFLAGS])
   AC_SUBST([SQLITE_LIBS])


That should be without the $ sign; m4 macros don't require any special
symbols for expansion. Also, better to quote the definition for the
sake of consistency, like this:



m4_define([SQLITE_REQUIRED_VERSION], [3.0])




I get through ./configure all right.

Makefile.am:

   program_CFLAGS += @SQLITE_CFLAGS@
   program_LDADD += @SQLITE_LIBS@

If I forced the package finding by using pkg-config directly (which
I've heard is a no-no with automake)...

   program_CFLAGS += `pkg-config --cflags sqlite3`
   program_LDFLAGS = `pkg-config --libs sqlite3`

...the program compiles just fine.


your manual commands probe for sqlite3, but the ones generated from
PKG_CHECK_MODULES will probe for sqlite. I notice that on my system
both .pc files exist. Have you checked which .pc files are found for
sqlite and sqlite3?


Regards, Thomas




