mightydeveloper edited a comment on issue #14607: split_and_load can now handle 
num_ctx > num_data. Github Issue #13909
URL: https://github.com/apache/incubator-mxnet/pull/14607#issuecomment-479461422
 
 
   @wkcn 
   In the example above, `losses` contains only the loss terms produced from real data samples; no loss is appended for the unnecessary (possibly fake) contexts in the preceding for loop.
   So I believe that when we call `loss.backward()`, gradients are computed only on the contexts that were appended in that loop, for the variables marked there.
   
   For example, suppose 3 examples remain in the dataset and we have 5 GPU 
contexts.
   We would call `losses.append(loss_fn(out, l))` only 3 times, marking 
variables on GPU contexts 0, 1, and 2.
   So when we call 
   ```
   for loss in losses:
       loss.backward()
   ```
   only gradients for GPUs 0, 1, and 2 will be calculated, and when we eventually 
call `trainer.step()`, the weights will be updated.
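   The dispatch described above can be sketched in plain Python. This is a simplified stand-in, not the real MXNet API: `split_and_load` and `loss_fn` here only mimic the shape of the Gluon functions, to show that with 3 samples and 5 contexts only contexts 0, 1, and 2 ever receive a loss term.
   ```python
   def split_and_load(data, num_ctx):
       # With num_data <= num_ctx, each real sample goes to its own context;
       # contexts beyond num_data receive nothing (no fake shards).
       shards = [[] for _ in range(num_ctx)]
       for i, sample in enumerate(data):
           shards[i % num_ctx].append(sample)
       # Keep only non-empty shards, paired with their context index.
       return [(ctx, shard) for ctx, shard in enumerate(shards) if shard]

   def loss_fn(out, label):
       # Stand-in squared-error loss.
       return (out - label) ** 2

   data = [1.0, 2.0, 3.0]   # 3 samples left in the last batch
   labels = [1.5, 2.5, 3.5]
   num_ctx = 5              # 5 GPU contexts

   losses = []
   touched_ctxs = []
   for ctx, shard in split_and_load(data, num_ctx):
       out = shard[0] * 2.0          # stand-in for a forward pass
       losses.append(loss_fn(out, labels[ctx]))
       touched_ctxs.append(ctx)

   # Only contexts 0, 1, 2 hold a loss; backward would run on these alone.
   print(touched_ctxs)   # [0, 1, 2]
   print(len(losses))    # 3
   ```
   Calling `loss.backward()` on each element of `losses` therefore touches only the three contexts that received real data.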
   Does this answer help you? (I might have misunderstood your concern)
   
   + Also, I noticed an indentation error in my original PR description, so I 
have edited it.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
