[ 
https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13968769#comment-13968769
 ] 

Pat Ferrel commented on MAHOUT-1464:
------------------------------------

I'm now running against the localhost master (spark://Maclaurin.local:7077) and 
getting out-of-heap-space errors. When I ran locally I just passed -Xms8000 to 
the JVM and that was fine.

I had to hack the mahoutSparkContext code; there doesn't seem to be a way to 
pass in or modify the conf. Notice the 4g:

      conf.setAppName(appName).setMaster(masterUrl)
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .set("spark.kryo.registrator",
               "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator")
          .set("spark.executor.memory", "4g")

This worked fine.
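One way mahoutSparkContext could avoid this kind of hack is to accept an optional hook that lets the caller adjust the conf before the context is created. Here is a minimal sketch of that pattern -- the names (makeConf, customize) are hypothetical, and a plain Scala stand-in replaces SparkConf so the example runs standalone:

```scala
// Hypothetical stand-in for org.apache.spark.SparkConf, so this sketch is
// self-contained; the real factory would build a SparkConf the same way.
class Conf {
  private val settings = scala.collection.mutable.LinkedHashMap[String, String]()
  def set(key: String, value: String): Conf = { settings(key) = value; this }
  def get(key: String): Option[String] = settings.get(key)
}

// A mahoutSparkContext-style factory: apply Mahout's defaults first, then hand
// the conf to a caller-supplied customizer (identity by default), so settings
// like spark.executor.memory can be overridden without editing the source.
def makeConf(masterUrl: String,
             appName: String,
             customize: Conf => Conf = identity): Conf = {
  val conf = new Conf()
    .set("spark.master", masterUrl)
    .set("spark.app.name", appName)
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.kryo.registrator",
         "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator")
  customize(conf)
}

// Caller bumps executor memory without touching the factory code.
val conf = makeConf("spark://Maclaurin.local:7077", "cooccurrence",
                    _.set("spark.executor.memory", "4g"))
```

With a hook like this the 4g line above would live at the call site instead of inside mahoutSparkContext.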

My dev machine is not part of the cluster and cannot participate because the 
path to scripts like start-slave.sh differs between the cluster and the dev 
machine (Mac vs. Linux). If I launch on the dev machine but point at a cluster 
managed by another machine, it eventually tries to look in IDEA's 
WORKING_DIRECTORY/_temporary for something that is not there -- maybe it's on 
the Spark master?

I need a way to launch this outside IDEA on a cluster machine; why shouldn't 
the spark_client method work?

Anyway, I'll keep trying to work this out; so far local and 'pseudo-cluster' 
modes work.

> Cooccurrence Analysis on Spark
> ------------------------------
>
>                 Key: MAHOUT-1464
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1464
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Collaborative Filtering
>         Environment: hadoop, spark
>            Reporter: Pat Ferrel
>            Assignee: Sebastian Schelter
>             Fix For: 1.0
>
>         Attachments: MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, 
> MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, run-spark-xrsj.sh
>
>
> Create a version of Cooccurrence Analysis (RowSimilarityJob with LLR) that 
> runs on Spark. This should be compatible with Mahout Spark DRM DSL so a DRM 
> can be used as input. 
> Ideally this would extend to cover MAHOUT-1422. This cross-cooccurrence has 
> several applications including cross-action recommendations. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
