Thank you for your reply.
But it sometimes succeeds when I rerun the job, and the job processes the same data using the same code.
From: Margusja
Date: 2017-11-09 14:25
To: bing...@iflytek.com
CC: user
Subject: Re: spark job paused(active stages finished)
You have to deal
I just want to know whether Spark will resubmit the completed tasks if the
later tasks still executing cannot find their output?
Thanks for any explanation.
--
Bing Jiang
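For what it's worth, Spark's normal behavior is to recompute lost partitions from lineage, which can effectively re-run tasks from stages that already finished. Below is a minimal sketch of truncating the lineage with a checkpoint, so lost shuffle output is read back instead of recomputed; the paths and names are assumptions, not from this thread:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("checkpoint-demo"))

// A reliable checkpoint directory (the HDFS path is an assumption)
sc.setCheckpointDir("hdfs:///tmp/checkpoints")

val counts = sc.textFile("hdfs:///input")
  .map(line => (line, 1))
  .reduceByKey(_ + _)

// Call checkpoint() before the action: the materialized output is written
// to reliable storage, so later stages that lose it read it back rather
// than forcing Spark to resubmit the completed upstream tasks.
counts.checkpoint()
counts.count()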
in vertical enterprises. We will continue to work with the community to
develop new features and improve the code base. Your comments and suggestions
are greatly appreciated.
Yan Zhou / Bing Xiao
Huawei Big Data team
1. I don't use spark-submit to run my program; I create the Spark context directly:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://123d101suse11sp3:7077")
  .setAppName("LBFGS")
  .set("spark.executor.memory", "30g")
  .set("spark.akka.frameSize", "20")
val sc = new SparkContext(conf)
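One hedged note on this embedded style: when the driver is your own program rather than spark-submit, the executors still need your application classes on their classpath; SparkConf.setJars can ship them. The jar path below is hypothetical:

// Hypothetical addition to the SparkConf chain above: ship the application
// jar so your classes resolve on the executors.
conf.setJars(Seq("/path/to/your-app.jar"))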
I have tested it in spark-1.1.0-SNAPSHOT.
It is OK now.
From: Xiangrui Meng [mailto:men...@gmail.com]
Sent: 2014-08-06 23:12
To: Lizhengbing (bing, BIPA)
Cc: user@spark.apache.org
Subject: Re: fail to run LBFS in 5G KDD data in spark 1.0.1?
Do you mind testing 1.1-SNAPSHOT and allocating more memory?
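For context, a minimal sketch of an L-BFGS run through MLlib 1.1, assuming the KDD data is in LIBSVM format; the input path is hypothetical:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.util.MLUtils

// Load LIBSVM-formatted training data (the path is an assumption)
val training = MLUtils.loadLibSVMFile(sc, "hdfs:///data/kdd/train.libsvm")

// LogisticRegressionWithLBFGS (added in Spark 1.1) runs full-batch L-BFGS,
// avoiding the per-iteration mini-batch sampling of the SGD variants
val model = new LogisticRegressionWithLBFGS().run(training)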
I want to use the Spark cluster through a Scala function, so I can integrate
Spark into my program directly.
For example: when I call a count function in my own program, the program
deploys the work to the cluster, so I get the result directly:
def count(): Long = {
  // Hypothetical completion: the original master value was truncated
  val master = "spark://host:7077"
  val sc = new SparkContext(new SparkConf().setMaster(master).setAppName("count"))
  sc.parallelize(1 to 100).count()
}
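Under these assumptions, any JVM application that links against Spark can call count() and receive the result of the distributed job as an ordinary return value.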
You might store your data in Tachyon.
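A minimal sketch of what that could look like in Spark 1.x, where StorageLevel.OFF_HEAP stored blocks in Tachyon; the Tachyon master URL and input path are assumptions:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

val conf = new SparkConf()
  .setAppName("tachyon-persist")
  // Spark 1.x configuration key; the URL is an assumption for illustration
  .set("spark.tachyonStore.url", "tachyon://tachyon-master:19998")
val sc = new SparkContext(conf)

// OFF_HEAP keeps blocks outside the executor JVMs, so cached data can
// survive an executor crash (though not a Tachyon failure)
val data = sc.textFile("hdfs:///input").persist(StorageLevel.OFF_HEAP)
data.count()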
From: Jahagirdar, Madhu [mailto:madhu.jahagir...@philips.com]
Sent: 2014-07-08 10:16
To: user@spark.apache.org
Subject: Spark RDD Disk Persistance
Should I use disk-based persistence for RDDs? And if the machine goes down
during the program execution, next
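For reference, a hedged sketch of the behavior being asked about: StorageLevel.DISK_ONLY writes blocks to each executor's local disk, so if that machine goes down the blocks are lost and Spark recomputes the missing partitions from lineage on the next action (the input path is an assumption):

import org.apache.spark.storage.StorageLevel

// Blocks go to executor-local disk, not to reliable storage; a lost
// machine means lost blocks, which Spark rebuilds from the RDD lineage.
val cached = sc.textFile("hdfs:///input").persist(StorageLevel.DISK_ONLY)
cached.count() // the first action materializes the blocks on disk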