Hello,

I am not sure why we are linking Boost statically in the conda-forge packages; 
my gut feeling is that we should link dynamically there. Wes, can you remember 
why?

Alex, would it be possible for you to send us the part of the segmentation 
fault trace that is not private to your modules? That would be a good 
indicator for us of what is going wrong.

Typically it is best to enable core dumps with `ulimit -c unlimited` and then 
run your program as usual; there should be no performance penalty. When it 
segfaults, run `gdb python core` (note that the core file might also be 
suffixed with the PID, depending on your system). In gdb, type `thread apply 
all bt full`. Post the output of that command and strip away the parts we 
should not see. Most relevant will be the stacktrace of the thread that 
segfaulted.
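
For example, the full session might look roughly like this (`my_script.py` is 
just a placeholder for however you normally start your program):

    # allow core files of unlimited size in this shell
    $ ulimit -c unlimited
    # run as usual; on a segfault the kernel writes a core file
    $ python my_script.py
    # load the interpreter together with the core file
    $ gdb python core
    # inside gdb: full backtrace of every thread
    (gdb) thread apply all bt full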

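As a quick way to test the hypothesis from your mail below, you could also 
check which Boost symbols the Arrow shared library exports to the dynamic 
linker. A rough sketch; the library path here is an assumption and will 
differ depending on where the conda package put it:

    # list demangled Boost symbols that libarrow exports dynamically
    # (adjust the path to point at the libarrow.so in your conda env)
    $ nm -DC --defined-only $CONDA_PREFIX/lib/libarrow.so | grep -i boost

If that prints defined boost:: symbols, the statically linked Boost is being 
re-exported and could shadow the copy your own extension modules expect.
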
Uwe 

> On 16.02.2018, at 23:17, Alex Samuel <a...@alexsamuel.net> wrote:
> 
> Hello,
> 
> I am having some trouble using the Continuum PyArrow conda package 
> dependencies in conjunction with internal C++ extension modules.
> 
> Apparently, Arrow and Parquet link Boost statically.  We have some internal 
> packages containing C++ code that links Boost libs dynamically.  If we 
> import Feather as well as our own extension modules into the same Python 
> process, we get random segfaults in Boost.  I think what's happening is that 
> our extension modules are picking up Boost's symbols from Arrow and Parquet 
> already loaded into the process, rather than from our own Boost shared libs.
> 
> Could anyone explain the policy for linking Boost in binary distributions, 
> particularly conda packages?  What is your expectation for how other C++ 
> extension modules should be built?
> 
> Thanks in advance,
> Alex
> 
