kaknikhil edited a comment on pull request #564:
URL: https://github.com/apache/madlib/pull/564#issuecomment-828873123


   @fmcquillan99 Adding this comment to consolidate the user doc changes:
   1. Mention in the user docs that rmsprop and adam do not support momentum, and that the dataset should always be minibatched.
   2. If the default value of beta2 is changed, then that should be reflected in the `Optimizer Parameters` section.
   3. If we expose the `epsilon` param, then add it to the `Optimizer Parameters` section as well.
   4. Should we add an example for rmsprop or adam in our user docs? (See the sketch after this list for what that might look like.)
   5. For `learning_rate_policy`, the user docs say `These are defined below, where 'iter' is the current iteration of SGD:`. To avoid confusion, should we remove the mention of `SGD`, given that `learning_rate_policy` also applies to rmsprop and adam?
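
   For point 4, here is a rough sketch of what such an example could look like, modeled on the existing minibatch MLP example in the user docs. The `optimizer_params` keys used below (`solver`, `beta1`, `beta2`, `eps`) are my assumption of what this PR ends up exposing, so they should be matched to the final parameter names and defaults:

   ```sql
   -- Pack the training data into mini-batches first, since rmsprop/adam
   -- require minibatched input.
   SELECT madlib.minibatch_preprocessor(
       'iris_data',             -- source table
       'iris_data_packed',      -- output table
       'class_text',            -- dependent variable
       'attributes'             -- independent variables
   );

   -- Train an MLP classifier with the adam solver. The optimizer_params
   -- keys (solver, beta1, beta2, eps) are illustrative placeholders.
   SELECT madlib.mlp_classification(
       'iris_data_packed',      -- output table of the mini-batch preprocessor
       'mlp_model',             -- model output table
       'independent_varname',   -- standardized column name in the packed table
       'dependent_varname',     -- standardized column name in the packed table
       ARRAY[5],                -- hidden layer sizes
       'learning_rate_init=0.003,
        n_iterations=500,
        solver=adam,
        beta1=0.9,
        beta2=0.999,
        eps=1e-7',              -- optimizer params
       'tanh'                   -- activation function
   );
   ```

   An analogous rmsprop example would just swap `solver=adam` for `solver=rmsprop` and replace the adam-specific keys with whatever decay parameter rmsprop exposes.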

