Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/16732
> After running some more experiments I was able to reduce the runtime by
another 1.5x factor. So currently the
"toCoordinateMatrix().toIndexedRowMatrix()" is better by a bi
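The conversion chain above can be illustrated with a local scipy analogue (this is not the Spark code from the PR; CoordinateMatrix roughly corresponds to scipy's COO format and IndexedRowMatrix to a row-oriented layout like CSR):

```python
# Local scipy sketch of the idea behind toCoordinateMatrix().toIndexedRowMatrix():
# coordinate-format storage is convenient to build but slow for arithmetic,
# so it is converted to a row-oriented format before multiplying.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
# COO (coordinate) format: explicit (row, col, value) triples,
# loosely analogous to Spark's CoordinateMatrix.
coo = sparse.random(1000, 1000, density=0.002, format="coo", random_state=rng)

# Convert to CSR (compressed sparse row) before the product,
# loosely analogous to IndexedRowMatrix's row-wise layout.
csr = coo.tocsr()
product = csr @ csr

print(product.shape)             # (1000, 1000)
print(sparse.issparse(product))  # True
```

The point mirrored here is that the storage layout chosen before the multiply, not the multiply itself, often dominates the runtime.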
Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/16732
> Looks good @uzadude; just saw this very old PR. However, what about
@akaltsikis's comment?
@srowen Tbh, after a year and a half I really can't recall many details.
I
Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/16732
Hey guys, I was implementing that as an external function in a jar to work
with Spark 2.1.1. Even though @uzadude improved the performance of the 2.x
implementation by creating 2 separate cases for sparse
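The "two separate cases" idea mentioned above can be sketched locally. This is an illustrative Python/scipy analogue, not the Scala code from the PR, and the `multiply` helper is hypothetical:

```python
# Hypothetical sketch of dispatching on sparse vs. dense operands,
# loosely analogous to the two separate code paths described above.
import numpy as np
from scipy import sparse

def multiply(a, b):
    """Pick a multiplication path based on the operands' storage format."""
    if sparse.issparse(a) and sparse.issparse(b):
        # Sparse x sparse: stay sparse to avoid materializing zeros.
        return a @ b
    # Mixed or dense: fall back to dense multiplication.
    a_dense = a.toarray() if sparse.issparse(a) else np.asarray(a)
    b_dense = b.toarray() if sparse.issparse(b) else np.asarray(b)
    return a_dense @ b_dense

s = sparse.eye(4, format="csr")
d = np.ones((4, 4))
print(sparse.issparse(multiply(s, s)))  # True
print(type(multiply(s, d)))             # <class 'numpy.ndarray'>
```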
Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/16732
Has the issue been resolved?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/14068
@uzadude Looking forward to making it work for matrices bigger than 1M x 1M
with a sparsity of 0.001-0.002.
That means I need to use distributed linear algebra methods. I am using
a
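A quick back-of-the-envelope check (my own arithmetic, not from the PR) shows why matrices at that scale and sparsity require distributed methods:

```python
# Storage estimate for a 1M x 1M matrix at density 0.001: dense float64
# storage vs. a COO-style sparse layout (two 4-byte indices plus one
# 8-byte value per nonzero). Figures are illustrative, not from the PR.
n = 1_000_000
density = 0.001

nnz = int(n * n * density)       # number of stored nonzeros
dense_bytes = n * n * 8          # float64 dense storage
sparse_bytes = nnz * (4 + 4 + 8) # COO-style sparse storage

print(nnz)                 # 1000000000
print(dense_bytes / 1e12)  # 8.0  terabytes dense
print(sparse_bytes / 1e9)  # 16.0 gigabytes sparse
```

Even the sparse representation is on the order of tens of gigabytes, which is why a single-machine library is not enough and the discussion turns to Spark's distributed matrices.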
Github user akaltsikis commented on the issue:
https://github.com/apache/spark/pull/14068
@uzadude Hey, I am looking to multiply VERY LARGE AND VERY SPARSE matrices
using Spark. I would love some discussion about it. Can you give me a way to
contact you?