I don't use spark-submit to run my program; I create a SparkContext directly:
val conf = new SparkConf()
  .setMaster("spark://123d101suse11sp3:7077")
  .setAppName("LBFGS")
  .set("spark.executor.memory", "30g")
  .set("spark.akka.frameSize", "20")
val sc = new SparkContext(conf)
I have tested it on spark-1.1.0-SNAPSHOT. It works now.
From: Xiangrui Meng [mailto:men...@gmail.com]
Sent: August 6, 2014 23:12
To: Lizhengbing (bing, BIPA)
Cc: user@spark.apache.org
Subject: Re: fail to run LBFGS on 5G KDD data in spark 1.0.1?
Do you mind testing 1.1-SNAPSHOT and allocating more memory?
I want to use the Spark cluster through a Scala function, so I can integrate Spark
into my program directly.
For example: when I call a count function in my own program, the program deploys
the function to the cluster, and I get the result back directly.
def count() = {
  val master =
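The snippet above is cut off in the original message. As a hedged sketch of the idea being described, a library function could build its own SparkConf and SparkContext, run the job, and return the result; the master URL, app name, and input path below are placeholders, not values from the original post:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch: embedding Spark inside an ordinary Scala function.
def count(masterUrl: String, path: String): Long = {
  val conf = new SparkConf()
    .setMaster(masterUrl)        // e.g. "spark://host:7077"
    .setAppName("EmbeddedCount") // placeholder app name
  val sc = new SparkContext(conf)
  try sc.textFile(path).count()  // the job runs on the cluster
  finally sc.stop()              // release cluster resources
}

Calling count("spark://host:7077", "hdfs:///data") from the host program would then run the count on the cluster and return the result directly, which is the integration pattern the poster asks about.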
You might store your data in Tachyon.
From: Jahagirdar, Madhu [mailto:madhu.jahagir...@philips.com]
Sent: July 8, 2014 10:16
To: user@spark.apache.org
Subject: Spark RDD Disk Persistence
Should I use disk-based persistence for RDDs, and if the machine goes down
during program execution, next
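The question is cut off here, but the trade-off it raises can be sketched. Assuming the Spark 1.x API, DISK_ONLY persistence keeps blocks on local executor disks, so blocks on a machine that goes down are lost and recomputed from lineage; for durability across failures, checkpointing to a shared filesystem (or an external store such as Tachyon/HDFS, as suggested above) is the usual answer. The paths below are placeholders:

import org.apache.spark.storage.StorageLevel

val data = sc.textFile("hdfs:///input")   // placeholder input path
data.persist(StorageLevel.DISK_ONLY)      // cached on local executor disks
data.count()                              // materializes the persisted blocks

sc.setCheckpointDir("hdfs:///checkpoints") // placeholder shared directory
data.checkpoint()                          // durable copy; also truncates lineage
data.count()                               // action that triggers the checkpoint

Persisted blocks lost with a failed machine are transparently recomputed, but a checkpointed RDD can be reloaded without replaying the whole lineage, which matters when recomputation is expensive.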