Github user dlwh commented on the pull request:

    https://github.com/apache/incubator-spark/pull/575#issuecomment-35218872
  
    @martinjaggi For how it's usually implemented, that's right. But you can
    quite likely get better performance on minibatches by doing a dense-vector/CSC
    multiply in lieu of a bunch of per-example dot products.
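
    (For concreteness, a minimal sketch of that idea in plain Scala, outside any
    particular library: the minibatch is stored column-wise in CSC form and all
    margins w . x_j are computed in one pass over the stored nonzeros. The names
    CscBatch and marginsViaCsc are illustrative only, not anything from this PR
    or from Breeze.)

        // Minibatch of k sparse examples held column-wise in CSC layout
        // (numFeatures rows, k columns); names here are illustrative only.
        case class CscBatch(
            numFeatures: Int,
            colPtrs: Array[Int],    // length k + 1; column j spans [colPtrs(j), colPtrs(j + 1))
            rowIndices: Array[Int], // feature index of each stored nonzero
            values: Array[Double])  // value of each stored nonzero

        // One tight pass over the contiguous CSC arrays yields every margin
        // w . x_j of the minibatch, rather than materialising k sparse vectors
        // and calling a dot product on each; a real implementation would hand
        // this loop to an optimised sparse kernel.
        def marginsViaCsc(w: Array[Double], batch: CscBatch): Array[Double] = {
          val k = batch.colPtrs.length - 1
          val margins = new Array[Double](k)
          var j = 0
          while (j < k) {
            var sum = 0.0
            var i = batch.colPtrs(j)
            while (i < batch.colPtrs(j + 1)) {
              sum += w(batch.rowIndices(i)) * batch.values(i)
              i += 1
            }
            margins(j) = sum
            j += 1
          }
          margins
        }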
    
    
    On Sun, Feb 16, 2014 at 2:35 PM, Martin Jaggi
    <notificati...@github.com> wrote:

    > @fommil <https://github.com/fommil> No matrix operations are performed at
    > all so far, only vector addition (of type dense += sparse). See the code in
    > this PR by @mengxr <https://github.com/mengxr>. Vector operations are
    > enough for clustering, classification and regression as currently in MLlib.
    > I was referring to the k-Means benchmark posted in the JIRA.
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/incubator-spark/pull/575#issuecomment-35218573>.
    >
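
    (The "dense += sparse" addition mentioned in the quoted reply is essentially
    an axpy over the sparse operand's stored nonzeros; a minimal sketch in plain
    Scala, with illustrative names rather than the actual MLlib/Breeze API:)

        // Illustrative sparse vector: indices of the nonzeros and their values.
        case class SparseVec(indices: Array[Int], values: Array[Double])

        // dense += scale * sparse: only the stored nonzeros of the sparse
        // operand are touched, so the cost is proportional to its nnz.
        def axpyInPlace(scale: Double, sparse: SparseVec, dense: Array[Double]): Unit = {
          var i = 0
          while (i < sparse.indices.length) {
            dense(sparse.indices(i)) += scale * sparse.values(i)
            i += 1
          }
        }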

