[
https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938805#comment-13938805
]
Dmitriy Lyubimov commented on MAHOUT-1464:
------------------------------------------
1.
{code}
val C = A.t %*% A
{code}
I don't remember if I actually put in the physical operator for non-skinny A.
There are two distinct algorithms to deal with it. The skinny one (n <= 5000 or
so) uses an upper-triangular, vector-backed accumulator to combine results
right in the map phase. Of course, if that accumulator does not realistically
fit in memory, then another algorithm has to be plugged in for A-squared. See
AtA.scala, def at_a_nongraph(). It currently throws an
UnsupportedOperationException (but everything I have done so far only uses slim A'A).
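(For illustration only, a minimal plain-Scala sketch of the upper-triangular
accumulator idea for the skinny case; the flat-array layout, the idx helper and
the object name are assumptions for this sketch, not Mahout's actual AtA code.)
{code}
// Sketch: for each row a of a vertical block of A, accumulate the outer
// product a * a^T into an upper-triangular accumulator of length n*(n+1)/2.
// Each map task emits one accumulator; element-wise summation of the
// accumulators yields the upper triangle of A'A.
object SkinnyAtASketch {

  // Row-major index of (i, j), i <= j, in the flattened upper triangle.
  def idx(i: Int, j: Int, n: Int): Int = i * n - i * (i - 1) / 2 + (j - i)

  // Fold one block (dense rows of length n) into a fresh accumulator.
  def accumulateBlock(block: Array[Array[Double]], n: Int): Array[Double] = {
    val acc = new Array[Double](n * (n + 1) / 2)
    for (row <- block; i <- 0 until n if row(i) != 0.0; j <- i until n)
      acc(idx(i, j, n)) += row(i) * row(j)
    acc
  }

  // Combine accumulators from different blocks (the reduce side).
  def merge(a: Array[Double], b: Array[Double]): Array[Double] = {
    val out = a.clone()
    for (k <- b.indices) out(k) += b(k)
    out
  }
}
{code}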
2. When using partial functions with mapBlock, you actually do not have to
write ({...}); just { } works:
{code}
drmBt = drmBt.mapBlock() {
  case (keys, block) =>
    // ...
    keys -> block
}
{code}
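(The same brace-only style works for any Scala method that takes its function
argument in its own parameter list; here is a self-contained illustration with
a hypothetical helper, not Mahout's API.)
{code}
// A pattern-matching anonymous function can be passed as a bare { } block
// when the function sits in its own parameter list, as with mapBlock above.
def mapPairs[A, B, C](pairs: Seq[(A, B)])(f: ((A, B)) => C): Seq[C] = pairs.map(f)

val renamed = mapPairs(Seq(1 -> "a", 2 -> "b")) {
  case (key, value) => key -> value.toUpperCase
}
// renamed == Seq(1 -> "A", 2 -> "B")
{code}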
> RowSimilarityJob on Spark
> -------------------------
>
> Key: MAHOUT-1464
> URL: https://issues.apache.org/jira/browse/MAHOUT-1464
> Project: Mahout
> Issue Type: Improvement
> Components: Collaborative Filtering
> Affects Versions: 0.9
> Environment: hadoop, spark
> Reporter: Pat Ferrel
> Labels: performance
> Fix For: 0.9
>
> Attachments: MAHOUT-1464.patch
>
>
> Create a version of RowSimilarityJob that runs on Spark. Ssc has a prototype
> here: https://gist.github.com/sscdotopen/8314254. This should be compatible
> with the Mahout Spark DRM DSL so that a DRM can be used as input.
> Ideally this would extend to cover MAHOUT-1422, which is a feature request for
> RSJ on two inputs to calculate the similarity of the rows of one DRM with
> those of another. This cross-similarity has several applications, including
> cross-action recommendations.