larroy opened a new issue #14999: libiomp5.so not found
URL: https://github.com/apache/incubator-mxnet/issues/14999

Can't load libmxnet: `libiomp5.so` is not found. libmxnet is linked dynamically against libomp:

```
(py3_venv) piotr@ip-172-31-63-171:0:~/mxnet_master (master)+$ ldd lib/libmxnet.so | grep omp
        libiomp5.so => not found
```

System: Ubuntu 18.04

```
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              72
On-line CPU(s) list: 0-71
Thread(s) per core:  2
Core(s) per socket:  18
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Stepping:            4
CPU MHz:             1206.897
BogoMIPS:            6000.00
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            25344K
NUMA node0 CPU(s):   0-17,36-53
NUMA node1 CPU(s):   18-35,54-71
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
----------Python Info----------
Version      : 3.6.7
Compiler     : GCC 8.2.0
Build        : ('default', 'Oct 22 2018 11:32:17')
Arch         : ('64bit', 'ELF')
------------Pip Info-----------
Version      : 19.1.1
Directory    : /home/piotr/mxnet_master/py3_venv/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform     : Linux-4.15.0-1035-aws-x86_64-with-Ubuntu-18.04-bionic
system       : Linux
node         : ip-172-31-63-171
release      : 4.15.0-1035-aws
version      : #37-Ubuntu SMP Mon Mar 18 16:15:14 UTC 2019
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0018 sec, LOAD: 0.6058 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0004 sec, LOAD: 0.1730 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0003 sec, LOAD: 0.0896 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0003 sec, LOAD: 0.0640 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0005 sec, LOAD: 0.1418 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0003 sec, LOAD: 0.0490 sec.
```

```
(py3_venv) piotr@ip-172-31-63-171:130:~/mxnet_master (master)+$ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mxnet as ms
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/piotr/mxnet_master/python/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/home/piotr/mxnet_master/python/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/home/piotr/mxnet_master/python/mxnet/base.py", line 214, in <module>
    _LIB = _load_lib()
  File "/home/piotr/mxnet_master/python/mxnet/base.py", line 205, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libiomp5.so: cannot open shared object file: No such file or directory
```

CMake configuration:

```
---
# CMake configuration
USE_CUDA: "OFF"                 # Build with CUDA support
USE_OLDCMAKECUDA: "OFF"         # Build with old cmake cuda
USE_NCCL: "OFF"                 # Use NVidia NCCL with CUDA
USE_OPENCV: "ON"                # Build with OpenCV support
USE_OPENMP: "ON"                # Build with OpenMP support
USE_CUDNN: "ON"                 # Build with cuDNN support; one could set CUDNN_ROOT for search path
USE_SSE: "ON"                   # Build with x86 SSE instruction support IF NOT ARM
USE_F16C: "ON"                  # Build with x86 F16C instruction support; autodetects support if "ON"
USE_LAPACK: "ON"                # Build with LAPACK support
USE_MKL_IF_AVAILABLE: "ON"      # Use MKL if found
USE_MKLML_MKL: "ON"             # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE)
USE_MKLDNN: "ON"                # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE)
USE_OPERATOR_TUNING: "ON"       # Enable auto-tuning of operators IF NOT MSVC
USE_GPERFTOOLS: "ON"            # Build with GPerfTools support (if found)
USE_JEMALLOC: "ON"              # Build with jemalloc support
USE_PROFILER: "ON"              # Build with profiler support
USE_DIST_KVSTORE: "OFF"         # Build with DIST_KVSTORE support
USE_PLUGINS_WARPCTC: "OFF"      # Use WarpCTC plugins
USE_PLUGIN_CAFFE: "OFF"         # Use Caffe plugin
USE_CPP_PACKAGE: "OFF"          # Build C++ package
USE_MXNET_LIB_NAMING: "ON"      # Use MXNet library naming conventions
USE_GPROF: "OFF"                # Compile with gprof (profiling) flag
USE_CXX14_IF_AVAILABLE: "OFF"   # Build with C++14 if the compiler supports it
USE_VTUNE: "OFF"                # Enable use of Intel Amplifier XE (VTune); one could set VTUNE_ROOT for search path
ENABLE_CUDA_RTC: "ON"           # Build with CUDA runtime compilation support
BUILD_CPP_EXAMPLES: "ON"        # Build C++ examples
INSTALL_EXAMPLES: "OFF"         # Install the example source files
USE_SIGNAL_HANDLER: "ON"        # Print stack traces on segfaults
USE_TENSORRT: "OFF"             # Enable inference optimization with TensorRT
USE_ASAN: "OFF"                 # Enable Clang/GCC ASAN sanitizers
ENABLE_TESTCOVERAGE: "OFF"      # Enable compilation with test coverage metric output
CMAKE_BUILD_TYPE: "Debug"
CMAKE_CUDA_COMPILER_LAUNCHER: "ccache"
CMAKE_C_COMPILER_LAUNCHER: "ccache"
CMAKE_CXX_COMPILER_LAUNCHER: "ccache"
```
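The `OSError` above is the dynamic loader failing to resolve libmxnet.so's `libiomp5.so` dependency at `dlopen` time. Whether the loader would find a given library can be checked independently of MXNet; the sketch below is a hypothetical diagnostic (the description of the `ld.so` search order is an assumption about standard Linux behaviour, and `ctypes.util.find_library` performs a comparable lookup via the `ldconfig` cache, not the exact same one):

```python
import ctypes.util

# ld.so resolves a NEEDED entry such as libiomp5.so by searching the binary's
# DT_RPATH/DT_RUNPATH, then LD_LIBRARY_PATH, then the ldconfig cache.
# ctypes.util.find_library does a comparable lookup, so it can indicate
# whether the loader is likely to resolve the dependency.
def loader_can_find(name):
    """Return the resolved soname for lib<name>, or None if it is not found."""
    return ctypes.util.find_library(name)

print(loader_can_find("c"))      # e.g. 'libc.so.6' on a glibc system
print(loader_can_find("iomp5"))  # None unless the MKL/MKLML lib dir is on the path
```

If `loader_can_find("iomp5")` returns `None`, a common workaround is to add the directory that contains `libiomp5.so` (for an MKLML-based build, typically somewhere under the build tree or an Intel install prefix; the exact location is build-dependent) to `LD_LIBRARY_PATH` before importing mxnet.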
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: [email protected]

With regards,
Apache Git Services
