Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-20 Thread Peter Eisentraut
On Sunday, 20 July 2008, Tom Lane wrote:
 * This disables AC_TRY_RUN tests, of course.  The only adverse
 consequence I noticed was failure to recognize that
 -Wl,-dead_strip_dylibs is applicable, which is marginally annoying but
 hardly fatal.

 On the whole I still wouldn't trust cross-compiled configure results.
 Better to get your prototype pg_config.h from the real deal.

For example, I'm a bit curious about the following aspect.  This program
should fail to compile on 32-bit platforms but succeed on 64-bit:

#include <stddef.h>

struct s { char a; long b; };

int main(int argc, char *argv[])
{
int array[offsetof(struct s, b) - 5];

return 0;
}

What happens if you run gcc -arch i386 -arch ppc64 on it?  Does it require
success on both output architectures?

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-20 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 For example, I'm a bit curious about the following aspect.  This program
 should fail to compile on 32-bit platforms but succeed on 64-bit:

 #include <stddef.h>

 struct s { char a; long b; };

 int main(int argc, char *argv[])
 {
 int array[offsetof(struct s, b) - 5];

 return 0;
 }

 What happens if you run gcc -arch i386 -arch ppc64 on it?  Does it require
 success on both output architectures?

Seems so.  On a current MacBook Pro:

$ cat test.c
#include <stddef.h>

struct s { char a; long b; };

int main(int argc, char *argv[])
{
int array[offsetof(struct s, b) - 5];

return 0;
}
$ gcc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch i386 -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch x86_64 -c test.c
$ gcc -arch ppc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
$ gcc -arch ppc64 -c test.c
$ gcc -arch i386 -arch x86_64 -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
lipo: can't figure out the architecture type of: 
/var/folders/5M/5MGusdunEbWmuxTsRCYfbk+++TI/-Tmp-//ccfrarXl.out
$ gcc -arch i386  -arch ppc -c test.c
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
test.c: In function 'main':
test.c:7: error: size of array 'array' is too large
lipo: can't figure out the architecture type of: 
/var/folders/5M/5MGusdunEbWmuxTsRCYfbk+++TI/-Tmp-//ccFqrJgr.out
$ 

This doesn't look amazingly well tested, though: what I suspect is
happening is that it runs N instances of the compiler (note the multiple
errors in the last case) and then tries to sew their output together
with lipo, whether they succeeded or not.  I'll bet the "can't figure
out" message reflects lipo not being able to make sense of a
zero-length .o file ...

regards, tom lane



Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-19 Thread Adriaan van Os

Tom Lane wrote:

The bad news is that if you only do that, only the arch that you
actually build on will work.  We have configure set up to insert
various hardware-dependent definitions into pg_config.h and
ecpg_config.h, and if you don't have the right values visible for
each compilation, the resulting executables will fail.

You can get around that by hacking up the generated config files
with #ifdef __i386__ and so on to expose the correct values of
the hardware-dependent symbols to each build.  Of course you have
to know what the correct values are --- if you don't have a sample
of each architecture handy to run configure against, it'd be easy
to miss some things.  And even then it's pretty tedious.  I am
not sure if it is possible or worth the trouble to try to automate
this part better.


It may be less pain to simply configure and build for ppc and i386 in
separate build directories and then glue the resulting binaries together
with lipo
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/lipo.1.html
to make them universal.
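
In sketch form, that approach could look roughly like this (untested; the
directory names and which binaries you glue are illustrative, and the -arch
flags and lipo itself require Apple's toolchain):

```shell
# Build once per architecture in separate directories.
mkdir build-ppc build-i386
(cd build-ppc  && CFLAGS="-arch ppc"  ../postgresql/configure && make)
(cd build-i386 && CFLAGS="-arch i386" ../postgresql/configure && make)

# Glue each pair of single-architecture binaries into a universal one.
lipo -create build-ppc/src/backend/postgres \
             build-i386/src/backend/postgres \
     -output postgres.universal

# Verify which architectures the result actually contains.
lipo -info postgres.universal
```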


Regards,

Adriaan van Os




Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-19 Thread Tom Lane
Adriaan van Os [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 You can get around that by hacking up the generated config files
 with #ifdef __i386__ and so on to expose the correct values of
 the hardware-dependent symbols to each build.  Of course you have
 to know what the correct values are --- if you don't have a sample
 of each architecture handy to run configure against, it'd be easy
 to miss some things.  And even then it's pretty tedious.  I am
 not sure if it is possible or worth the trouble to try to automate
 this part better.

 It may be less pain to simply configure and build for ppc and i386 in
 separate build directories and then glue the resulting binaries together
 with lipo

That might give you working executables, but you still need a
glued-together pg_config.h for installation purposes, if you'd
like people to be able to build extensions against the installation.

In any case, the preceding thread showed exactly how to do it that way,
and it didn't look like less pain to me ...

regards, tom lane



Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-19 Thread Florian G. Pflug

Tom Lane wrote:

You can get around that by hacking up the generated config files
with #ifdef __i386__ and so on to expose the correct values of
the hardware-dependent symbols to each build.  Of course you have
to know what the correct values are --- if you don't have a sample
of each architecture handy to run configure against, it'd be easy
to miss some things.  And even then it's pretty tedious.  I am
not sure if it is possible or worth the trouble to try to automate
this part better.


Hm - configure *does* the right thing if CFLAGS is set to *just* -arch
i386 or -arch ppc (at least on Intel hardware, because OS X can run ppc
binaries there, but not vice versa), right?  If that is true, we need
some way to run configure multiple times, once for each arch, but then
still end up with *one* set of Makefiles that have all the archs in
their CFLAGS ...



Modulo the above problems, I was able to build i386+ppc binaries that
do in fact work on both architectures.  I haven't got any 64-bit Apple
machines to play with, so there might be 64-bit issues I missed.
Still, this is a huge step forward compared to what was discussed here:
http://archives.postgresql.org/pgsql-general/2008-02/msg00200.php
I think that my MacBook should be able to build and run 64-bit binaries, 
so I can test that if you want. Do you have a script that does the 
necessary config file magic, or did you do that by hand?


regards, Florian Pflug



Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-19 Thread Peter Eisentraut
On Saturday, 19 July 2008, Tom Lane wrote:
 The bad news is that if you only do that, only the arch that you
 actually build on will work.  We have configure set up to insert
 various hardware-dependent definitions into pg_config.h and
 ecpg_config.h, and if you don't have the right values visible for
 each compilation, the resulting executables will fail.

I'd imagine a related problem is the run tests in configure.  They will
produce results for the platform that you run configure on.  More properly,
you should run configure in cross-compilation mode (twice, and then merge the
output, as previously described), but I am not sure how that will turn out
when configure attempts to determine alignment and endianness with
compilation-only tests.  You should probably check some of those results very
carefully and help it out with some cache variables.
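
Seeding a cross-compiling configure run with cache variables might look
something like this.  The exact cache-variable names are an assumption here
(they vary with the autoconf version); check configure and config.log for
the names actually generated:

```shell
# Hypothetical sketch: pre-answer the run tests that cross-compilation
# cannot execute by passing cache variables on the command line.
CFLAGS="-arch ppc64" ./configure \
    --host=powerpc64-apple-darwin9.4.0 \
    ac_cv_c_bigendian=yes \
    ac_cv_alignof_long=8 \
    ac_cv_sizeof_size_t=8
```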



Re: [HACKERS] Getting to universal binaries for Darwin

2008-07-19 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 I'd imagine a related problem is the run tests in configure.  They will
 produce results for the platform that you run configure on.  More properly, 
 you should run configure in cross-compilation mode (twice, and then merge the
 output, as previously described), but I am not sure how that will turn out 
 when configure attempts to determine alignment and endianness with 
 compilation-only tests.

For the record, I got plausible-looking configure output from tests like

CFLAGS="-arch ppc64" ./configure --host=powerpc64-apple-darwin9.4.0

Whether it'd actually work I dunno, but it looked plausible.  Two notes:

* You have to use both parts of the recipe: without --host, configure
doesn't think it's cross-compiling, and without CFLAGS, gcc doesn't ;-)

* This disables AC_TRY_RUN tests, of course.  The only adverse
consequence I noticed was failure to recognize that
-Wl,-dead_strip_dylibs is applicable, which is marginally annoying but
hardly fatal.

On the whole I still wouldn't trust cross-compiled configure results.
Better to get your prototype pg_config.h from the real deal.

regards, tom lane
