[
https://issues.apache.org/jira/browse/MAHOUT-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348602#comment-15348602
]
ASF GitHub Bot commented on MAHOUT-1837:
----------------------------------------
GitHub user andrewpalumbo opened a pull request:
https://github.com/apache/mahout/pull/244
MAHOUT-1837 flip <= threshold to > at the final return for dense
fix for the incorrect threshold analysis
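For context, a minimal sketch of the sense of the change; {{density}} and
{{denseThreshold}} here are illustrative stand-ins, not the actual names used
by the in-core analysis:
{code}
// Illustrative sketch only: `density` and `denseThreshold` stand in for the
// values computed by the in-core analysis.
// Before the fix, the result was flagged dense when density <= denseThreshold;
// after the fix, it is flagged dense only when density > denseThreshold.
def isDenseResult(density: Double, denseThreshold: Double): Boolean =
  density > denseThreshold
{code}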
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/andrewpalumbo/mahout MAHOUT-1837-b
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/mahout/pull/244.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #244
----
commit 1388d8f2d3bbdc50bf0e554b6b9176da2231f7d1
Author: Andrew Palumbo <[email protected]>
Date: 2016-06-24T20:51:36Z
flip <= threshold to > at the final return for dense
----
> Sparse/Dense Matrix analysis for Matrix Multiplication
> ------------------------------------------------------
>
> Key: MAHOUT-1837
> URL: https://issues.apache.org/jira/browse/MAHOUT-1837
> Project: Mahout
> Issue Type: Improvement
> Components: Math
> Affects Versions: 0.12.0
> Reporter: Andrew Palumbo
> Assignee: Andrew Palumbo
> Fix For: 0.13.0
>
> Attachments: compareDensityTest.ods
>
>
> In matrix multiplication, sparse matrices can easily turn dense and bloat
> memory: a single fully dense column in the left operand and a single fully
> dense row in the right operand are enough to make a sparse %*% sparse
> operation produce a fully dense result.
> There are two issues here, one with a quick fix and one a bit more involved:
> # In {{ABt.scala}}, check the {{MatrixFlavor}} of the block and use that
> flavor to decide whether the combiner is allocated as a sparse or dense matrix:
> {code}
> val comb = if (block.getFlavor == MatrixFlavor.SPARSELIKE) {
>   new SparseMatrix(prodNCol, block.nrow).t
> } else {
>   new DenseMatrix(prodNCol, block.nrow).t
> }
> {code}
> A similar check needs to be made in the {{blockify}} transformation (a
> flavor-aware sketch follows this list).
>
> # More important, and more involved, is to do an actual analysis of the
> resulting matrix data in the in-core {{mmul}} class and return a matrix of
> the appropriate structure; a possible density-estimate starting point is
> sketched after this list.
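>
> A sketch of the kind of flavor-aware allocation suggested for {{blockify}};
> the helper name and signature here are hypothetical, and the real blockify
> transformation in the distributed bindings differs in detail:
> {code}
> import org.apache.mahout.math.{DenseMatrix, Matrix, SparseRowMatrix, Vector}
>
> // Sketch: choose the block's structure from the incoming row vectors rather
> // than defaulting to dense; if every row is sparse, keep the block sparse.
> def blockOf(rows: Array[Vector], ncol: Int): Matrix = {
>   val block: Matrix =
>     if (rows.forall(!_.isDense)) new SparseRowMatrix(rows.length, ncol)
>     else new DenseMatrix(rows.length, ncol)
>   rows.zipWithIndex.foreach { case (v, i) => block.assignRow(i, v) }
>   block
> }
> {code}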
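>
> For the in-core analysis, one possible (assumed, not the committed approach)
> starting point is a probabilistic density estimate for C = A %*% B: treating
> nonzeros as independent, the expected density of C is roughly
> 1 - (1 - dA * dB)^k, where dA and dB are the operand densities and k is the
> common dimension; the result structure can then be chosen against a threshold:
> {code}
> // Sketch only: the formula and the 0.25 cutoff are illustrative assumptions.
> def estimateProductDensity(densityA: Double, densityB: Double, k: Int): Double =
>   1.0 - math.pow(1.0 - densityA * densityB, k)
>
> // Flag the product as dense only when the estimate exceeds the threshold.
> def chooseDenseResult(densityA: Double, densityB: Double, k: Int,
>                       threshold: Double = 0.25): Boolean =
>   estimateProductDensity(densityA, densityB, k) > threshold
> {code}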