mightydeveloper commented on issue #14607: split_and_load can now handle num_ctx > num_data. GitHub Issue #13909
URL: https://github.com/apache/incubator-mxnet/pull/14607#issuecomment-479504030

> However, it seems that `trainer.step()` will update the weights using all gradients on the 5 GPU contexts.
> The gradients on GPU 3, 4 may not be zero tensors.

Thanks for checking and providing the relevant code link!

So, if I want to zero out the gradients, I guess I should either:

1. call `trainer._params.zero_grad()`,
2. or just call `net.collect_params().zero_grad()` (assuming that I initialized `trainer` with `Trainer(net.collect_params(), ...)`),
3. or maybe call `trainer.step(ignore_stale_grad=True)` instead?

Would all three of them work? Which one would be better?
