[
https://issues.apache.org/jira/browse/SPARK-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-1270:
-----------------------------
Fix Version/s: (was: 1.0.0)
> An optimized gradient descent implementation
> --------------------------------------------
>
> Key: SPARK-1270
> URL: https://issues.apache.org/jira/browse/SPARK-1270
> Project: Spark
> Issue Type: Improvement
> Components: MLlib
> Affects Versions: 1.0.0
> Reporter: Xusen Yin
> Labels: GradientDescent, MLlib
>
> The current implementation of GradientDescent is inefficient in some
> respects, especially on high-latency networks. I propose a new
> implementation, GradientDescentWithLocalUpdate, which follows a parallelism
> model inspired by Jeff Dean's DistBelief and Eric Xing's SSP (Stale
> Synchronous Parallel). With a few modifications to runMiniBatchSGD,
> GradientDescentWithLocalUpdate can outperform the original sequential
> version by about 4x without sacrificing accuracy, and can easily be adopted
> by most classification and regression algorithms in MLlib.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]