leezu commented on issue #4659: gpu memory allocate will be error when using multiprocessing.Process
URL: 
https://github.com/apache/incubator-mxnet/issues/4659#issuecomment-441890199
 
 
   Here is an updated test case
   
    ```
    import mxnet as mx
    from multiprocessing import Process

    def test():
        # Crashes in the child process: mx.random.seed touches CUDA-related
        # code internally even though no GPU work is requested here.
        mx.random.seed(1)

    if __name__ == '__main__':
        # CUDA is initialized in the parent before the children are forked.
        a = mx.nd.random_normal(shape=(10, 10), ctx=mx.gpu(0))
        runs = [Process(target=test) for i in range(1)]
        for p in runs:
            p.start()
        for p in runs:
            p.join()
    ```
   
    Here CUDA is initialized in the parent process before the child 
processes are started. One could argue that GPU operations in the child 
processes should not be supported, but then the situation must be handled 
gracefully, i.e. an error should be raised on the Python side rather than 
in the C++ code. But even accepting the current C++ exception, the example 
above crashes although we only want to do CPU work in the child process, 
because `mx.random.seed` invokes CUDA-related code internally. So there is 
currently no way to execute code deterministically in the child processes, 
and code may crash at unexpected times (such as when calling `random.seed`).
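
    A common workaround (a sketch only, not an official MXNet fix) is to 
start the workers with the `spawn` start method instead of the default 
`fork` on Linux. A spawned child runs a fresh interpreter and inherits no 
CUDA context from the parent, so CUDA-using libraries can initialize 
cleanly in it. The `worker` function below plays the role of `test` in the 
example above:

    ```python
    import multiprocessing as mp

    def worker():
        # Under 'spawn' this runs in a fresh interpreter with no inherited
        # CUDA context, so CUDA-touching calls (like mx.random.seed) could
        # initialize cleanly here.
        print('child ran in', mp.current_process().name)

    if __name__ == '__main__':
        # get_context avoids changing the global start method for other code;
        # 'spawn' forks nothing, so no stale CUDA state reaches the child.
        ctx = mp.get_context('spawn')
        runs = [ctx.Process(target=worker) for _ in range(1)]
        for p in runs:
            p.start()
        for p in runs:
            p.join()
    ```

    The trade-off is that `spawn` re-imports the main module in each child 
(hence the `if __name__ == '__main__'` guard is mandatory) and arguments 
passed to the child must be picklable.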

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services