[
https://issues.apache.org/jira/browse/SPARK-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hyukjin Kwon updated SPARK-11579:
---------------------------------
Labels: bulk-closed (was: )
> Methods SGDOptimizer and LBFGSOptimizer in FeedForwardTrainer should not
> create a new optimizer every time they are invoked
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-11579
> URL: https://issues.apache.org/jira/browse/SPARK-11579
> Project: Spark
> Issue Type: Improvement
> Components: ML
> Affects Versions: 1.6.0
> Reporter: yuhao yang
> Priority: Minor
> Labels: bulk-closed
>
> This is just a small proposal based on some customer feedback. I can send a
> PR if it looks reasonable.
> Currently, the methods SGDOptimizer and LBFGSOptimizer in FeedForwardTrainer
> create a new optimizer every time they are invoked. This is not intuitive,
> since users assume they are still configuring the existing optimizer when
> they write:
> feedForwardTrainer
> .SGDOptimizer
> .setMiniBatchFraction(0.002)
> yet this actually creates a new optimizer, discarding the properties that
> were set previously.
> A straightforward solution is to avoid creating a new optimizer when the
> current optimizer is already of the same kind:
> if (!optimizer.isInstanceOf[LBFGS])
> optimizer = new ...
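The guard above could be sketched as follows. This is a minimal, self-contained illustration of the proposed pattern, not the actual Spark classes: the Trainer, GradientDescent, and LBFGS types here are simplified stand-ins for FeedForwardTrainer and its optimizers, assumed only for demonstration.

```scala
// Hypothetical simplified stand-ins for the Spark ML classes.
trait Optimizer
class GradientDescent extends Optimizer { var miniBatchFraction: Double = 1.0 }
class LBFGS extends Optimizer { var numCorrections: Int = 10 }

class Trainer {
  private var optimizer: Optimizer = new GradientDescent

  // Reuse the current optimizer when it is already an SGD instance,
  // so properties set earlier (e.g. miniBatchFraction) survive.
  def SGDOptimizer: GradientDescent = {
    if (!optimizer.isInstanceOf[GradientDescent]) {
      optimizer = new GradientDescent
    }
    optimizer.asInstanceOf[GradientDescent]
  }

  // Same guard for LBFGS: only replace the optimizer on a kind change.
  def LBFGSOptimizer: LBFGS = {
    if (!optimizer.isInstanceOf[LBFGS]) {
      optimizer = new LBFGS
    }
    optimizer.asInstanceOf[LBFGS]
  }
}
```

With this sketch, repeated calls such as `trainer.SGDOptimizer.setMiniBatchFraction(...)` would keep mutating the same instance rather than silently resetting it; a fresh optimizer is created only when the user switches between SGD and LBFGS.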
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]