[
https://issues.apache.org/jira/browse/SPARK-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178833#comment-16178833
]
Sean Owen commented on SPARK-22115:
-----------------------------------
Sounds good as we already have the native acceleration plumbed through for
GEMV, so this should be quite fast.
> Add operator for linalg Matrix and Vector
> -----------------------------------------
>
> Key: SPARK-22115
> URL: https://issues.apache.org/jira/browse/SPARK-22115
> Project: Spark
> Issue Type: Improvement
> Components: ML, MLlib
> Affects Versions: 3.0.0
> Reporter: Peng Meng
>
> For example, there is a lot of code in LDA like this:
> {code:scala}
> phiNorm := expElogbetad * expElogthetad +:+ 1e-100
> {code}
> Here expElogbetad is a breeze Matrix and expElogthetad is a breeze Vector.
> This code calls BLAS GEMV for the product, then loops over the result a
> second time for the element-wise add (+:+ 1e-100).
> Actually, this can be done with a single GEMV call, because the standard
> gemv interface is:
> gemv(alpha, A, x, beta, y) // y := alpha*A*x + beta*y
> We can provide operators (e.g. element-wise product (:*) and element-wise
> sum (:+)) on the Spark linalg Matrix and Vector, and replace the breeze
> Matrix and Vector with their Spark linalg counterparts.
> Then, for every case of the form y = alpha*A*x + beta*y, a single GEMM or
> GEMV call suffices; there is no need to call GEMM or GEMV and then loop
> over the result (for the add), as the current implementation does.
> I can help implement it if we plan to add this feature.
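The fusion described above can be sketched in plain Java. This is a hypothetical illustration, not Spark's actual linalg API: `gemv` is a hand-rolled reference implementation of the standard BLAS contract y := alpha*A*x + beta*y, and `fusedPhiNorm` shows the trick of pre-filling y with the constant (1e-100) and passing beta = 1, so the element-wise add needs no second pass over the result.

```java
import java.util.Arrays;

// Hypothetical sketch, not Spark's actual API: fuse the element-wise add
// into the GEMV call by pre-filling y with the constant and using beta = 1.
public class GemvFusion {

    // Reference gemv: y := alpha*A*x + beta*y (A is rows x cols, row-major).
    static void gemv(double alpha, double[][] a, double[] x,
                     double beta, double[] y) {
        for (int i = 0; i < a.length; i++) {
            double dot = 0.0;
            for (int j = 0; j < x.length; j++) {
                dot += a[i][j] * x[j];
            }
            y[i] = alpha * dot + beta * y[i];
        }
    }

    // One fused call: result = A*x + 1e-100, with no second loop for the add.
    static double[] fusedPhiNorm(double[][] a, double[] x) {
        double[] y = new double[a.length];
        Arrays.fill(y, 1e-100);   // y starts as the constant term
        gemv(1.0, a, x, 1.0, y);  // y := 1.0*A*x + 1.0*y
        return y;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[] x = {1, 1};
        double[] fused = fusedPhiNorm(a, x);

        // Two-pass version for comparison: gemv, then a loop for the add.
        double[] twoPass = new double[a.length];
        gemv(1.0, a, x, 0.0, twoPass);
        for (int i = 0; i < twoPass.length; i++) {
            twoPass[i] += 1e-100;
        }
        System.out.println(Arrays.equals(fused, twoPass));
    }
}
```

With a real BLAS backend the fused form saves one full traversal of the output vector per call, which is exactly the per-iteration loop the LDA code above currently pays.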
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]