larroy commented on issue #14979: [BUG] Using a package with MKL and GPU versions, using python to open a new process will cause an error
URL: https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-517083648

Reproduce with a debug build:

```
(py3_venv) piotr@ip-172-31-21-159:0: ~/mxnet [master]> python ~/test.py

Segmentation fault: 11

Stack trace:
[bt] (0) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(+0x345d9d9) [0x7fc3e00189d9]
[bt] (1) /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20) [0x7fc4055e0f20]
[bt] (2) /home/piotr/mxnet/build/3rdparty/openmp/runtime/src/libomp.so(+0x34250) [0x7fc3b863c250]
[bt] (3) /home/piotr/mxnet/build/3rdparty/openmp/runtime/src/libomp.so(+0x34d3e) [0x7fc3b863cd3e]
[bt] (4) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(mxnet::engine::OpenMP::set_reserve_cores(int)+0x6d) [0x7fc3dff68d5d]
[bt] (5) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#2}::operator()() const+0x4f) [0x7fc3dff79c0f]
[bt] (6) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(std::shared_ptr<mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)1> > mxnet::common::LazyAllocArray<mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)1> >::Get<mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#2}>(int, mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#2})+0x414) [0x7fc3dff7b0f4]
[bt] (7) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)+0x481) [0x7fc3dff7c871]
[bt] (8) /home/piotr/mxnet/python/mxnet/../../build/libmxnet.so(mxnet::engine::ThreadedEngine::Push(mxnet::engine::Opr*, mxnet::Context, int, bool)+0x1a8) [0x7fc3dff6d358]
```

Flags:

```
USE_CUDA: "ON" # Build with CUDA support
USE_OLDCMAKECUDA: "OFF" # Build with old cmake cuda
USE_NCCL: "OFF" # Use NVidia NCCL with CUDA
USE_OPENCV: "ON" # Build with OpenCV support
USE_OPENMP: "ON" # Build with Openmp support
USE_CUDNN: "ON" # Build with cudnn support) # one could set CUDNN_ROOT for search path
USE_SSE: "ON" # Build with x86 SSE instruction support IF NOT ARM
USE_F16C: "ON" # Build with x86 F16C instruction support) # autodetects support if "ON"
USE_LAPACK: "ON" # Build with lapack support
USE_MKL_IF_AVAILABLE: "ON" # Use MKL if found
USE_MKLML_MKL: "ON" # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE)
USE_MKLDNN: "ON" # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE)
USE_OPERATOR_TUNING: "ON" # Enable auto-tuning of operators IF NOT MSVC
USE_GPERFTOOLS: "ON" # Build with GPerfTools support (if found)
USE_JEMALLOC: "ON" # Build with Jemalloc support
USE_PROFILER: "ON" # Build with Profiler support
USE_DIST_KVSTORE: "OFF" # Build with DIST_KVSTORE support
USE_PLUGINS_WARPCTC: "OFF" # Use WARPCTC Plugins
USE_PLUGIN_CAFFE: "OFF" # Use Caffe Plugin
USE_CPP_PACKAGE: "OFF" # Build C++ Package
USE_MXNET_LIB_NAMING: "ON" # Use MXNet library naming conventions.
USE_GPROF: "OFF" # Compile with gprof (profiling) flag
USE_CXX14_IF_AVAILABLE: "OFF" # Build with C++14 if the compiler supports it
USE_VTUNE: "OFF" # Enable use of Intel Amplifier XE (VTune)) # one could set VTUNE_ROOT for search path
ENABLE_CUDA_RTC: "ON" # Build with CUDA runtime compilation support
BUILD_CPP_EXAMPLES: "ON" # Build cpp examples
INSTALL_EXAMPLES: "OFF" # Install the example source files.
USE_SIGNAL_HANDLER: "ON" # Print stack traces on segfaults.
USE_TENSORRT: "OFF" # Enable inference optimization with TensorRT.
USE_ASAN: "OFF" # Enable Clang/GCC ASAN sanitizers.
ENABLE_TESTCOVERAGE: "OFF" # Enable compilation with test coverage metric output
CMAKE_BUILD_TYPE: "Release"
CMAKE_CUDA_COMPILER_LAUNCHER: "ccache"
CMAKE_C_COMPILER_LAUNCHER: "ccache"
CMAKE_CXX_COMPILER_LAUNCHER: "ccache"
```
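The contents of `~/test.py` are not shown above. Going only by the issue title and the `libomp.so` frames in the trace, a minimal sketch of the trigger pattern — fork a new Python process after the parent has initialized a threaded (OpenMP) runtime — might look like the following. The `child` function and the MXNet call it stands in for are assumptions for illustration, not the reporter's actual script:

```python
# Hypothetical sketch of the repro pattern (not the reporter's test.py):
# the parent starts a threaded runtime, then forks a child process.
import multiprocessing as mp

def child():
    # In the real report the child would touch MXNet here (e.g. create an
    # NDArray), re-entering the OpenMP runtime that fork() left in an
    # inconsistent state -- only the forking thread survives in the child --
    # and crashing inside libomp.so as in the stack trace above.
    pass

ctx = mp.get_context("fork")   # "fork" is the default start method on Linux
p = ctx.Process(target=child)
p.start()
p.join()
print(p.exitcode)  # 0 in this harmless sketch; a segfault in the MXNet case
```

A common workaround for this class of crash is to use the `"spawn"` start method instead of `"fork"`, which starts a fresh interpreter in the child rather than inheriting the parent's (now-invalid) OpenMP thread state.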
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: [email protected]

With regards,
Apache Git Services
