On 05/07/2017 08:37 PM, Yann Ylavic wrote:
On Sun, May 7, 2017 at 4:15 PM, Dennis Clarke <dcla...@blastwave.org> wrote:

node000 $ echo $CPPFLAGS
-I/usr/local/include -I/usr/local/ssl/include -D_TS_ERRNO
-D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE
[]

The compile is where we see every sort of non-portable warning or
error imaginable.  Let's look at the few seconds of progress we
managed to get out of it. In fact, even with the more relaxed "cc"
compiler we don't get very far.

/usr/local/bin/bash /usr/local/build-1/libtool --silent --mode=compile
/opt/developerstudio12.5/bin/cc   -errfmt=error -erroff=%none -errshort=full
-xstrconst -xildoff -m64 -xarch=sparc -xmemalign=8s -xnolibmil -Xa
-xcode=pic32 -xregs=no%appl -xlibmieee -mc -g -xs -ftrap=%none -Qy
-xbuiltin=%none -xdebugformat=dwarf -xunroll=1 -D_TS_ERRNO
-D_POSIX_PTHREAD_SEMANTICS -D_LARGEFILE64_SOURCE -DHAVE_CONFIG_H
-DSOLARIS2=10 -D_REENTRANT  -I/usr/local/include -I/usr/local/ssl/include
-I/usr/local/include/apr-1 -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS
-D_LARGEFILE64_SOURCE
-I/usr/local/build/apr-util-1.6.0_beta_SunOS5.10_sparcv9.001/include
-I/usr/local/build/apr-util-1.6.0_beta_SunOS5.10_sparcv9.001/include/private
-I/opt/mysql/mysql/include  -I/usr/local/include/apr-1
-I/usr/local/ssl/include -I/usr/local/include  -o
buckets/apr_buckets_file.lo -c buckets/apr_buckets_file.c && touch
buckets/apr_buckets_file.lo
"buckets/apr_buckets_file.c", line 112: error: undefined struct/union
member: read_size

Looks like, due to $CPPFLAGS above, an old
"/usr/local/include/apr-1/apr_buckets.h" already exists and is
prioritized over the one in
"/usr/local/build/apr-util-1.6.0_beta_SunOS5.10_sparcv9.001/include".


I was thinking the same thing. I am going to spin up a fully separate
beta site to do this testing with. I want full isolation, with apr-1.6.0
sorted out first and the entire include directory /usr/local/include/apr-1/
removed from the mix before starting.  This is the only way to clean out
the old before working on the new.
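
Roughly something like this (a sketch only; apr 1.6.0 would then be built
and installed again so a fresh apr-1 include directory exists before
apr-util is configured):

node000 $ mv /usr/local/include/apr-1 /usr/local/include/apr-1.old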

Dennis

