Vikas89 commented on issue #12255: Pretty high cpu load when import mxnet
URL: https://github.com/apache/incubator-mxnet/issues/12255#issuecomment-443371214
 
 
   Looks like the processes get stuck in `gomp_team_start` when I use `multiprocessing`:
   
   ```
   #0  0x00007f5a797a774a in do_spin (val=22256, addr=addr@entry=0x55f45a6451c4)
       at /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libgomp/config/linux/x86/futex.h:130
   #1  do_wait (addr=addr@entry=0x55f45a6451c4, val=val@entry=22256)
       at /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libgomp/config/linux/wait.h:66
   #2  0x00007f5a797a7813 in gomp_barrier_wait_end (bar=0x55f45a6451c0, state=22256)
       at /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libgomp/config/linux/bar.c:48
   #3  0x00007f5a797a6a1d in gomp_simple_barrier_wait (bar=<optimized out>)
       at /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libgomp/config/posix/simple-bar.h:60
   #4  gomp_team_start (fn=<optimized out>, data=<optimized out>, nthreads=7, flags=<optimized out>, team=0x55f45a646790)
       at /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libgomp/team.c:829
   #5  0x00007f5a4f6cd8a8 in ?? () from /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so
   #6  0x00007f5a4f6dee3c in ?? () from /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so
   #7  0x00007f5a4f6df9fd in ?? () from /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so
   #8  0x00007f5a4f6dfb53 in ?? () from /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so
   #9  0x00007f5a4cb5f794 in ?? () from /home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so
   #10 0x00007f5a7f5336ba in call_init (l=<optimized out>, argc=argc@entry=2, argv=argv@entry=0x7ffd800cd128, env=env@entry=0x55f45a0422b0) at dl-init.c:72
   #11 0x00007f5a7f5337cb in call_init (env=0x55f45a0422b0, argv=0x7ffd800cd128, argc=2, l=<optimized out>) at dl-init.c:30
   #12 _dl_init (main_map=main_map@entry=0x55f45a485c90, argc=2, argv=0x7ffd800cd128, env=0x55f45a0422b0) at dl-init.c:120
   #13 0x00007f5a7f5388e2 in dl_open_worker (a=a@entry=0x7ffd800c8610) at dl-open.c:575
   ```
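   
   For reference, the `multiprocessing` version that produces the backtrace above looks roughly like this (a sketch reconstructed from the commented-out lines in the threads snippet below; `getpid` comes from `os`, and 8 worker processes are assumed):
   
   ```
   import time
   import multiprocessing
   from os import getpid
   
   def mxnet_worker():
       print("before import: pid:{}".format(getpid()))
       st_time = time.time()
       import mxnet  # each child hangs here, in gomp_team_start, during library init
       end_time = time.time()
       print("after import: pid:{} time:{}".format(getpid(), end_time - st_time))
   
   # one forked process per worker; each one imports mxnet independently
   read_process = [multiprocessing.Process(target=mxnet_worker) for i in range(8)]
   for p in read_process:
       p.daemon = True
       time.sleep(3)
       p.start()
   time.sleep(100000)
   ```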
   
   But if I use threads instead, as below, they don't get stuck:
   
   ```
   import time
   from os import getpid
   from threading import Thread
   
   def mxnet_worker():
       print("before import: pid:{}".format(getpid()))
       st_time = time.time()
       import mxnet
       end_time = time.time()
       print("after import: pid:{} time:{}".format(getpid(), end_time - st_time))
   
   # read_process = [multiprocessing.Process(target=mxnet_worker) for i in range(8)]
   # start 8 importer threads; these all finish without hanging
   for i in range(8):
       t = Thread(target=mxnet_worker)
       t.start()
   # for p in read_process:
   #     p.daemon = True
   #     time.sleep(3)
   #     p.start()
   # time.sleep(100000)
   ```
   
   
   Looks like there is an issue with fork + OpenMP.
   We should check whether this is related: https://bisqwit.iki.fi/story/howto/openmp/#OpenmpAndFork
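   
   If fork + OpenMP really is the culprit, one quick check (a sketch, not a fix for the underlying problem) might be forcing the `spawn` start method so each child runs a fresh interpreter instead of inheriting forked OpenMP runtime state:
   
   ```
   import multiprocessing
   
   def mxnet_worker():
       import mxnet  # imported in a freshly spawned interpreter, not a forked one
   
   if __name__ == "__main__":
       # 'spawn' launches children from scratch instead of via fork(),
       # sidestepping fork + OpenMP interactions in libgomp
       multiprocessing.set_start_method("spawn")
       procs = [multiprocessing.Process(target=mxnet_worker) for _ in range(8)]
       for p in procs:
           p.start()
       for p in procs:
           p.join()
   ```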
   
   
