I would also vote for option 1, implemented through a new (string?)
parameter for SGD.
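
Roughly what I am picturing for that parameter, just as a sketch (the
parameter name learningRateMethod, the setter, and the schedule names are
placeholders that mirror the cases Trevor listed below, not existing FlinkML
API):

    // Sketch only: learningRateMethod would be the new string parameter,
    // set via something like SGD().setLearningRateMethod("invscaling").
    def effectiveLearningRate(
        learningRateMethod: String,
        learningRate: Double,
        regularizationConstant: Double,
        iteration: Int): Double = learningRateMethod match {
      case "default"    => learningRate / Math.sqrt(iteration)        // current behaviour
      case "constant"   => learningRate
      case "optimal"    => 1.0 / (regularizationConstant * iteration)
      case "invscaling" => learningRate / Math.pow(iteration, 0.5)
      case other =>
        throw new IllegalArgumentException(s"Unknown learning rate method: $other")
    }

A string also reads better in user code than a magic integer, and adding a
new schedule would still only require one more case.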

Also, see a previous discussion here
<http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/MultipleLinearRegression-Strange-results-td5931.html>
about adaptive learning rates.
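
For comparison, my mental picture of Method 2 is an overridable hook along
these lines (again only a sketch; none of these names exist in FlinkML
today):

    // Sketch only: a hook users could override for custom or adaptive schedules.
    trait LearningRateSchedule extends Serializable {
      def apply(initialRate: Double, regularizationConstant: Double, iteration: Int): Double
    }

    // Default implementation reproducing the current behaviour.
    object SqrtDecay extends LearningRateSchedule {
      def apply(initialRate: Double, regularizationConstant: Double, iteration: Int): Double =
        initialRate / math.sqrt(iteration)
    }

Choosing option 1 now would not necessarily rule out adding something like
this later.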

On Mon, Aug 31, 2015 at 5:42 PM, Trevor Grant <trevor.d.gr...@gmail.com>
wrote:

> https://issues.apache.org/jira/browse/FLINK-1994
>
> There are two ways to set the effective learning rate:
>
>
> Method 1) Several pre-baked ways to calculate the effective learning rate,
> set as a switch. E.g.:
> val effectiveLearningRate = optimizationMethod match {
>   // original effective learning rate method, kept for backward compatibility
>   case 0 => learningRate / Math.sqrt(iteration)
>   // these come straight from sklearn
>   case 1 => learningRate
>   case 2 => 1 / (regularizationConstant * iteration)
>   case 3 => learningRate / Math.pow(iteration, 0.5) ...
> }
>
> Method 2) Make the calculation definable by the user, e.g. introduce a
> function to the class which may be overridden.
>
> This is a classic trade-off between ease of use and functionality. Method 1
> is easier for novice users and users migrating from sklearn. Method 2
> will be more extensible, letting users write any effective learning
> rate calculation they want.
>
> I am leaning toward Method 1, because how many people are really writing
> their own custom effective learning rate, as long as there is a fairly good
> number of 'pre-baked' calculators available? And if someone really wants to
> add a method, it simply requires adding another case.
>
> I want to open this up in case anyone has an opinion.
>
> Best,
> tg
>
> Trevor Grant
> Data Scientist
> https://github.com/rawkintrevo
> http://stackexchange.com/users/3002022/rawkintrevo
>
> *"Fortunate is he, who is able to know the causes of things."  -Virgil*
>
