[
https://issues.apache.org/jira/browse/SPARK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313398#comment-14313398
]
Evan Sparks commented on SPARK-5705:
------------------------------------
This JIRA is a continuation of this thread:
http://apache-spark-developers-list.1001551.n3.nabble.com/Using-CUDA-within-Spark-boosting-linear-algebra-td10481.html
To summarise: high-speed linear algebra operations, including but not limited
to matrix multiplies and solves, have the potential to make certain machine
learning operations faster on Spark. However, we need to balance the overhead
of copying data to, and calling out to, the GPU against other factors in the
design of the system.
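As a rough illustration of that trade-off, here is a back-of-the-envelope sketch
(the CPU GEMM, GPU GEMM, and PCIe throughput numbers are assumptions, not
measurements) of when shipping a single dense multiply to the GPU pays for the
copies:
{code:scala}
// Back-of-the-envelope model (illustrative only) of when offloading a dense
// m x k by k x n double-precision multiply to a GPU pays for the data copies.
// The throughput numbers below are assumptions, not measurements.
object GpuOffloadModel {
  def main(args: Array[String]): Unit = {
    val (m, k, n) = (4096, 4096, 4096)
    val flops = 2.0 * m * k * n                  // multiply-adds in one GEMM
    val bytes = 8.0 * (m * k + k * n + m * n)    // doubles moved over the bus

    val cpuGflops = 100.0   // assumed multi-threaded CPU BLAS throughput
    val gpuGflops = 1000.0  // assumed GPU GEMM throughput
    val pcieGBs   = 6.0     // assumed effective PCIe bandwidth in GB/s

    val cpuTime = flops / (cpuGflops * 1e9)
    val gpuTime = flops / (gpuGflops * 1e9) + bytes / (pcieGBs * 1e9)

    println(f"CPU GEMM:          $cpuTime%.3f s")
    println(f"GPU GEMM + copies: $gpuTime%.3f s")
  }
}
{code}
The general shape is that large, square multiplies amortise the copy cost well,
while for the skinny matrices we often see in MLlib the transfer term can dominate.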
Additionally, getting these libraries compiled, linked, and configured on a
target system is unfortunately not trivial. We should make sure we have a
standard process for doing this (perhaps starting from this codebase:
http://github.com/shivaram/matrix-bench).
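One cheap piece of such a process could be a runtime sanity check that the
intended native backend actually got picked up. Assuming the netlib-java layer
MLlib already goes through, a sketch like this prints which BLAS implementation
was loaded:
{code:scala}
// Sanity check (assuming the netlib-java backend MLlib uses) of which BLAS
// implementation actually got loaded on this machine.
import com.github.fommil.netlib.BLAS

object BlasCheck {
  def main(args: Array[String]): Unit = {
    // NativeSystemBLAS / NativeRefBLAS mean a native library was linked in
    // (possibly one intercepted by something like NVBLAS); F2jBLAS means we
    // silently fell back to the pure-JVM implementation.
    println(s"Loaded BLAS: ${BLAS.getInstance().getClass.getName}")
  }
}
{code}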
Maybe we should start with some applications where we think GPU acceleration
could help? Neural nets are one, LDA is another; any others?
> Explore GPU-accelerated Linear Algebra Libraries
> ------------------------------------------------
>
> Key: SPARK-5705
> URL: https://issues.apache.org/jira/browse/SPARK-5705
> Project: Spark
> Issue Type: Bug
> Components: MLlib
> Reporter: Evan Sparks
> Priority: Minor
>