Maxim Abalenkov wrote:

Dear all,

I’m looking for guidance, please. I would like to make sure that I use all 
eight of my CPU cores when I run NumPy under Python 3.9.9 on macOS 12.1. 
When I run my NumPy code, I see in ‘htop’ that only one ‘python’ process is 
running and core utilisation is 20–25%. I remember that in the past the stock 
MacPorts NumPy installation used Apple’s Accelerate framework, including its 
multithreaded BLAS and LAPACK 
(https://developer.apple.com/documentation/accelerate). As I understand it, 
this is no longer the case.
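As a quick way to confirm this from within Python, NumPy can report which BLAS/LAPACK it was built against; a minimal sketch:

```python
import numpy as np

# Print the BLAS/LAPACK build configuration this NumPy was compiled
# against; an OpenBLAS-linked build mentions "openblas" here, while an
# Accelerate-linked build would mention "accelerate" instead.
np.show_config()
```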

I run my Python code in a virtual environment; its NumPy extension modules live under

  /opt/venv/zipfstime/lib/python3.9/site-packages/numpy/core

When I change to that directory and run

  otool -L _multiarray_umath.cpython-39-darwin.so

_multiarray_umath.cpython-39-darwin.so:
        @loader_path/../.dylibs/libopenblas.0.dylib (compatibility version 
0.0.0, current version 0.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 1281.100.1)

In other words, NumPy links against OpenBLAS. The command `port variants openblas` 
returns:

OpenBLAS has the variants:
   g95: Build using the g95 Fortran compiler
     * conflicts with gcc10 gcc11 gcc8 gcc9 gccdevel
   gcc10: Build using the MacPorts gcc 10 compiler
     * conflicts with g95 gcc11 gcc8 gcc9 gccdevel
[+]gcc11: Build using the MacPorts gcc 11 compiler
     * conflicts with g95 gcc10 gcc8 gcc9 gccdevel
   gcc8: Build using the MacPorts gcc 8 compiler
     * conflicts with g95 gcc10 gcc11 gcc9 gccdevel
   gcc9: Build using the MacPorts gcc 9 compiler
     * conflicts with g95 gcc10 gcc11 gcc8 gccdevel
   gccdevel: Build using the MacPorts gcc devel compiler
     * conflicts with g95 gcc10 gcc11 gcc8 gcc9
[+]lapack: Add Lapack/CLapack support to the library
   native: Force compilation on machine to get fully optimized library
   universal: Build for multiple architectures

I tried installing the “native” variant of the OpenBLAS port with `sudo port 
install openblas +native` and setting the environment variable 
`OMP_NUM_THREADS=8`, but I saw no improvement when running my Python code. 
I would welcome your help and guidance on this subject.
Josh replied:

I'm using py39-numpy with default variants:

% port installed py39-numpy openblas
The following ports are currently installed:
  OpenBLAS @0.3.19_0+gcc11+lapack (active)
  py39-numpy @1.21.5_1+gfortran+openblas (active)

I see Python using around 600% CPU on my 6-core machine when running this basic benchmark script: <https://gist.github.com/markus-beuckelmann/8bc25531b11158431a5b09a45abd6276>
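If fetching the gist is awkward, a stripped-down sketch of the same idea (the matrix size is arbitrary) is enough to watch core usage in ‘htop’:

```python
import time
import numpy as np

n = 2000  # arbitrary size; large enough that BLAS threading matters
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # dispatches to BLAS dgemm, which should use all cores
elapsed = time.perf_counter() - start
print(f"{n}x{n} matrix multiply: {elapsed:.2f} s")
```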

If you try that and see how many cores it uses, that will at least tell you whether something is different about your code. If it doesn't use all the cores for you, there are some other environment variables that OpenBLAS consults that you could check: <https://github.com/xianyi/OpenBLAS#setting-the-number-of-threads-using-environment-variables>
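For what it's worth, that OpenBLAS README says its own `OPENBLAS_NUM_THREADS` variable takes precedence over `OMP_NUM_THREADS` in non-OpenMP builds, and the variables must be set before the library is loaded. A minimal sketch (the count of 8 is just an example):

```python
import os

# OpenBLAS reads these when it is first loaded, so they must be set
# before numpy is imported anywhere in the process.  The count of 8
# is just an example for an 8-core machine.
os.environ["OPENBLAS_NUM_THREADS"] = "8"
os.environ["OMP_NUM_THREADS"] = "8"

import numpy as np  # import deliberately after the environment setup

a = np.random.rand(1500, 1500)
b = np.random.rand(1500, 1500)
c = a @ b  # this matmul should now fan out across the requested threads
print(c.shape)
```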

- Josh
