vrakesh commented on issue #14917: Changing the learning rate affect the 
accuracy
URL: 
https://github.com/apache/incubator-mxnet/issues/14917#issuecomment-491069897
 
 
   Hi @CynthiaProtector, it turns out this is expected behavior. Once the 
learning rate is set to zero, training essentially stops: running training on 
the next batch of data with the learning rate at zero means you are effectively 
being shown test accuracy rather than training accuracy.
   
   I would suggest a small, non-zero amount of decay instead, to see if that 
helps.
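   A minimal sketch (plain Python, not MXNet-specific; the base rate and decay 
factor are hypothetical values) of what multiplicative decay looks like: the 
learning rate shrinks each epoch but never actually reaches zero, so updates 
never stop entirely.

   ```python
   base_lr = 0.1   # hypothetical starting learning rate
   decay = 0.9     # hypothetical per-epoch decay factor

   # Learning rate for each epoch: lr_t = base_lr * decay**t
   lrs = [base_lr * decay ** epoch for epoch in range(5)]

   # Every entry stays strictly positive, so gradient descent keeps updating.
   ```

   In MXNet you would typically achieve the same effect with a learning-rate 
scheduler attached to the optimizer rather than computing the schedule by hand.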
   
   But setting the learning rate to zero essentially means stopping gradient 
descent: no further improvements to the weights are made on new batches of data.
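   The point above can be sketched with the plain SGD update rule, w <- w - lr * grad 
(a toy illustration with made-up weights and gradients, not MXNet's internal 
optimizer code):

   ```python
   def sgd_step(weights, grads, lr):
       # Plain SGD update: each weight moves against its gradient, scaled by lr.
       return [w - lr * g for w, g in zip(weights, grads)]

   w = [0.5, -1.2]   # hypothetical current weights
   g = [0.3, 0.7]    # hypothetical gradients from a new batch

   frozen = sgd_step(w, g, lr=0.0)  # lr = 0: the weights do not change at all
   moved = sgd_step(w, g, lr=0.1)   # lr > 0: the weights actually update
   ```

   With `lr=0.0` the update term vanishes, so every subsequent batch leaves 
the model exactly as it was, which is why the reported "training" accuracy 
behaves like an evaluation metric.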
   
   If this answer satisfies your query, feel free to close this issue.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
