On Jan 9, 2016 04:12, "Julian Taylor" <jtaylor.deb...@googlemail.com> wrote:
>
> On 09.01.2016 04:38, Nathaniel Smith wrote:
> > On Fri, Jan 8, 2016 at 7:17 PM, Nathan Goldbaum <nathan12...@gmail.com> wrote:
> >> Doesn't building on CentOS 5 also mean using a quite old version of gcc?
> >
> > Yes. IIRC CentOS 5 ships with gcc 4.4, and you can bump that up to gcc
> > 4.8 by using the Red Hat Developer Toolset release (which is gcc +
> > special backport libraries to let it generate RHEL5/CentOS5-compatible
> > binaries). (I might have one or both of those version numbers slightly
> > wrong.)
> >
> >> I've never tested this, but I've seen claims on the anaconda mailing
> >> list of ~25% slowdowns compared to building from source or using
> >> system packages, which was attributed to building using an older gcc
> >> that doesn't optimize as well as newer versions.
> >
> > I'd be very surprised if that were a 25% slowdown in general, as
> > opposed to a 25% slowdown on some particular inner loop that happened
> > to neatly match some new feature in a new gcc (e.g. something where
> > the new autovectorizer kicked in). But yeah, in general this is just
> > an inevitable trade-off when it comes to distributing binaries: you're
> > always going to pay some penalty for achieving broad compatibility as
> > compared to artisanally hand-tuned binaries specialized for your
> > machine's exact OS version, processor, etc. Not much to be done,
> > really. At some point the baseline for compatibility will switch to
> > "compile everything on CentOS 6", and that will be better but it will
> > still be worse than binaries that target CentOS 7, and so on and so
> > forth.
> >
>
> I have over the years put in one gcc-specific optimization after the
> other, so yes, using an ancient version will make many parts
> significantly slower. Though that is not really a problem; updating a
> compiler is easy even without Red Hat's devtoolset.
>
> At least as far as numpy is concerned, Linux binaries should not be a
> very big problem. The only dependency where the version matters is
> glibc, which has updated the interfaces we use (in a backward-compatible
> way) many times.
> But if we use an old enough baseline glibc (e.g. CentOS 5 or Ubuntu
> 10.04) we are fine, at a reasonable performance cost: basically only a
> slower memcpy.

Are you saying that it's easy to use, say, gcc 5.3's C compiler to produce
binaries that will run on an out-of-the-box CentOS 5 install? I assumed
that there'd be issues with things like new symbol versions in libgcc, not
just glibc, but if not then that would be great...
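
(For what it's worth, one unofficial way to check is to dump the versioned
symbols a built extension actually imports. Here is a rough Python sketch,
assuming binutils' objdump is on the PATH; the script and its function
name are just illustrative, not anything numpy ships:)

    # Rough sketch (not part of numpy): list the versioned symbols
    # (GLIBC_*, GLIBCXX_*, GCC_*, CXXABI_*) that an ELF shared object
    # imports, by parsing `objdump -T` output.
    import re
    import subprocess
    import sys
    from collections import defaultdict

    def required_symbol_versions(path):
        # Undefined ("*UND*") dynamic symbols are the ones the library
        # needs from its dependencies; objdump annotates each with the
        # version it was linked against, e.g. memcpy@GLIBC_2.14.
        out = subprocess.check_output(["objdump", "-T", path])
        out = out.decode("utf-8", "replace")
        needed = defaultdict(set)
        for line in out.splitlines():
            if "*UND*" not in line:
                continue
            m = re.search(r"(GLIBC|GLIBCXX|GCC|CXXABI)_([0-9.]+)", line)
            if m:
                needed[m.group(1)].add(m.group(2))
        return needed

    if __name__ == "__main__":
        for dep, versions in sorted(required_symbol_versions(sys.argv[1]).items()):
            print("%s: %s" % (dep, ", ".join(sorted(versions))))

Running that over the .so files in a wheel would give a quick (if crude)
picture of the oldest glibc/libgcc the wheel could possibly run on.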

> SciPy on the other hand is a larger problem, as it contains C++ code.
> Linux systems are now transitioning to C++11, which is partly binary
> incompatible with the old standard. A lot of testing is necessary there
> to check whether we are affected.
> How does Anaconda deal with C++11?

IIUC, the situation with the C++ stdlib changes in gcc 5 is that old
binaries will continue to work on new systems. The only thing that breaks
is that if two libraries want to pass objects of the affected types back
and forth (e.g. std::string), then either they both need to be compiled
with the old ABI or they both need to be compiled with the new ABI. (And
when using a new compiler it's still possible to choose the old ABI with a
#define; old compilers of course only support the old ABI.)

See: http://developerblog.redhat.com/2015/02/05/gcc5-and-the-c11-abi/
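
(A crude heuristic for telling which ABI a given shared object was built
against: the new-ABI std::string and std::list live in the std::__cxx11
inline namespace, so their mangled names contain "__cxx11". A rough
Python sketch, assuming binutils' nm is available; note that the absence
of the marker only means no new-ABI symbols are exported, not proof that
the old ABI was used:)

    # Heuristic sketch: guess a shared object's libstdc++ ABI by
    # scanning its dynamic symbol table for the std::__cxx11 tag.
    import subprocess
    import sys

    def has_cxx11_abi_symbols(path):
        out = subprocess.check_output(["nm", "-D", path])
        # New-ABI std::string mangles as _ZNSt7__cxx1112basic_string...,
        # i.e. std::__cxx11::basic_string, so "__cxx11" shows up in the
        # dynamic symbol table.
        return b"__cxx11" in out

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            if has_cxx11_abi_symbols(path):
                tag = "new (C++11) ABI"
            else:
                tag = "no __cxx11 symbols"
            print("%s: %s" % (path, tag))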

So the answer is that most Python packages don't care, because even the
ones written in C++ don't generally talk C++ across package boundaries;
for the ones that do care, the people making the binary packages will
have to coordinate to use the same ABI. And for local builds on modern
systems that link against binary packages built using the old ABI, people
might have to use -D_GLIBCXX_USE_CXX11_ABI=0.
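
(For concreteness, a hypothetical setup.py fragment showing how that
define gets passed through a Python build; the package and file names
here are made up for illustration:)

    # Hypothetical setup.py fragment: force the old (pre-gcc-5)
    # libstdc++ ABI so this extension can exchange std::string etc.
    # with binary packages built by an older compiler.
    from distutils.core import setup, Extension

    ext = Extension(
        "mypkg._wrapper",                # placeholder module name
        sources=["src/wrapper.cpp"],     # placeholder source file
        define_macros=[("_GLIBCXX_USE_CXX11_ABI", "0")],
        language="c++",
    )

    setup(name="mypkg", version="0.1", ext_modules=[ext])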

-n