I strongly recommend running AT MOST ONE training instance per GPU.

If two or more instances share the same GPU, the total training time will be 
longer than running those instances sequentially.
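One way to enforce the one-instance-per-GPU rule (a minimal sketch; the GPU index shown is illustrative) is to pin each training process to its own device with the standard `CUDA_VISIBLE_DEVICES` environment variable, set before the framework initializes:

```python
import os

# Pin this process to a single physical GPU. CUDA_VISIBLE_DEVICES must be
# set before MXNet (or any CUDA-based framework) initializes, so set it
# as the very first thing in the training script.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # "0" is illustrative: one GPU per process

# From here on, the framework sees only that one device, exposed as device 0,
# e.g. in MXNet:
#   import mxnet as mx
#   ctx = mx.gpu(0)
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Launching a second instance with `CUDA_VISIBLE_DEVICES="1"` then keeps the two trainings on separate GPUs instead of contending for one.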

(That's what I found on Linux. On Windows, training speed is often lower than 
on Linux, so if you're training a big network, it is better to install a 
Linux system; I'm using `Manjaro` and training a network right now.)





---
[Visit Topic](https://discuss.mxnet.io/t/how-to-limit-gpu-memory-usage/6304/7) 
or reply to this email to respond.
