Job aborted due to not serializable exception

2016-06-29 Thread Paolo Patierno
e.spark.SparkException: Job aborted due to stage failure: Failed to serialize task 465, not attempting to retry it. Exception during serialization: java.io.NotSerializableException: org.apache.spark.streaming.amqp.JavaMyReceiverStreamSuite If I change the fn definition with something simpler lik
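When the `NotSerializableException` names the test-suite class itself (here `JavaMyReceiverStreamSuite`), the usual cause is a lambda that captured `this` or a field of the enclosing class, so Spark tries to serialize the whole suite with the task. A minimal JVM-only sketch of the mechanism, with no Spark involved (`MySuite` and its field are hypothetical stand-ins):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Predicate;

public class ClosureDemo {
    // Hypothetical stand-in for a non-serializable enclosing class (e.g. a test suite).
    static class MySuite {
        final String keyword = "error";
    }

    // Returns true if the object survives Java serialization, false otherwise.
    static boolean serializes(Object obj) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(obj);
            return true;
        } catch (IOException e) { // NotSerializableException is an IOException
            return false;
        }
    }

    public static void main(String[] args) {
        MySuite suite = new MySuite();

        // Referencing suite.keyword captures the whole non-serializable MySuite.
        Predicate<String> bad =
            (Predicate<String> & Serializable) line -> line.contains(suite.keyword);
        System.out.println("bad serializes: " + serializes(bad));   // false

        // Fix: copy the field into a local variable so only the String is captured.
        String kw = suite.keyword;
        Predicate<String> good =
            (Predicate<String> & Serializable) line -> line.contains(kw);
        System.out.println("good serializes: " + serializes(good)); // true
    }
}
```

Copying the captured field into a local variable before building the closure is the standard workaround; making the enclosing class `Serializable` also works but ships the whole object with every task.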

Re: java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-26 Thread Ted Yu
on what could be causing this?? > > This is the exception that I am getting: > > [MySparkApplication] WARN : Failed to execute SQL statement select * > from TableS s join TableC c on s.property = c.property from X YZ > org.apache.spark.SparkException: Job aborted due to stage failu

java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-26 Thread Sahil Sareen
c on s.property = c.property from X YZ org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 5710.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5710.0 (TID 341269, ip-10-0-1-80.us-west-2.compute.internal): java.io.FileNotFoundException: /mnt/md0/var/lib/

Spark 1.4.2- java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-24 Thread Sahil Sareen
I tried increasing spark.shuffle.io.maxRetries to 10 but didn't help. This is the exception that I am getting: [MySparkApplication] WARN : Failed to execute SQL statement select * from TableS s join TableC c on s.property = c.property from X YZ org.apache.spark.SparkException
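For reference, the retry setting mentioned above lives in `spark-defaults.conf` alongside related network knobs. A sketch with illustrative values, not recommendations; `spark.shuffle.io.maxRetries`, `spark.shuffle.io.retryWait`, and `spark.network.timeout` are real Spark properties, but raising them only masks the problem if executors are actually dying:

```
# spark-defaults.conf -- illustrative values, not tuned recommendations
spark.shuffle.io.maxRetries   10
spark.shuffle.io.retryWait    10s
spark.network.timeout         300s
```

If fetches still fail after generous retries, check the executor logs for the original failure (OOM kill, full disk) rather than retrying harder.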

RE: Re: Job aborted due to stage failure: java.lang.StringIndexOutOfBoundsException: String index out of range: 18

2015-08-30 Thread Cheng, Hao
ne”) } } From the log “java.lang.ArrayIndexOutOfBoundsException: 71”, seems something wrong with your data, is that your intention? Thanks, Hao From: our...@cnsuning.com [mailto:our...@cnsuning.com] Sent: Friday, August 28, 2015 7:20 PM To: Terry Hole Cc: user Subject: Re: Re: Job ab

Re: Re: Job aborted due to stage failure: java.lang.StringIndexOutOfBoundsException: String index out of range: 18

2015-08-28 Thread ai he
> have all completed, from pool > org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 > in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage > 0.0 (TID 9, 10.104.74.7): java.lang.ArrayIndexOutOfBoundsException: 71 > at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.appl

Re: Re: Job aborted due to stage failure: java.lang.StringIndexOutOfBoundsException: String index out of range: 18

2015-08-28 Thread our...@cnsuning.com
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 9, 10.104.74.7): java.lang.ArrayIndexOutOfBoundsException: 71 at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:23) at $iwC$$iwC$$iwC$$iwC
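An `ArrayIndexOutOfBoundsException: 71` inside an `$anonfun` like this almost always comes from indexing the array returned by `split` on a malformed input row. A plain-Java sketch of the failure and a defensive variant (the column index 71 mirrors the exception above; the sample data is made up):

```java
import java.util.Optional;

public class SafeSplitDemo {
    // Strict parse: fields[71] throws ArrayIndexOutOfBoundsException on a short row.
    static String parseStrict(String line) {
        return line.split(",")[71];
    }

    // Defensive parse: return Optional.empty() instead of throwing on a short row.
    static Optional<String> parseSafe(String line) {
        String[] fields = line.split(",");
        return fields.length > 71 ? Optional.of(fields[71]) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(parseSafe("a,b,c")); // too few columns: Optional.empty
        try {
            parseStrict("a,b,c");
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("strict parse threw on malformed row");
        }
    }
}
```

In a Spark job the same idea applies inside `map`/`filter`: guard the index (or use `flatMap` with the `Optional`-style variant) so one bad record does not fail the task four times and abort the stage.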

Re: Job aborted due to stage failure: java.lang.StringIndexOutOfBoundsException: String index out of range: 18

2015-08-28 Thread Terry Hole
10.104.74.6: java.lang.StringIndexOutOfBoundsException > (String index out of range: 18) [duplicate 5] > 15/08/28 17:00:54 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks > have all completed, from pool > 15/08/28 17:00:54 INFO TaskSched

Job aborted due to stage failure: java.lang.StringIndexOutOfBoundsException: String index out of range: 18

2015-08-28 Thread our...@cnsuning.com
INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool 15/08/28 17:00:54 INFO TaskSchedulerImpl: Cancelling stage 9 15/08/28 17:00:54 INFO DAGScheduler: ShuffleMapStage 9 (collect at <console>:31) failed in 0.206 s 15/08/28 17:00:54 INFO DAGScheduler: Job 6 failed: collect at <console>:31, t

Re: Job aborted due to stage failure: Task not serializable:

2015-07-16 Thread Akhil Das
Did you try this? val out = lines.filter(xx => { val y = xx; val x = broadcastVar.value; var flag: Boolean = false; for (a <- x) { if (y.contains(a)) flag = true }; flag }) Thanks Best Regards On Wed, Jul 15, 2015 at 8:10 PM, Naveen Dabas wrote: > I am using the
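The suggestion above builds a Boolean with a mutable flag and a loop; the same predicate can be written as a short-circuiting one-liner. A plain-Java sketch without the Spark/broadcast plumbing (the keywords and sample lines are made up; in a real job the keyword list would come from `broadcastVar.value`):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterDemo {
    // Stand-in for broadcastVar.value: the lookup values shipped to each executor.
    static final List<String> KEYWORDS = Arrays.asList("ERROR", "FATAL");

    // Loop-with-flag version from the thread, minus the Spark plumbing.
    static boolean matchesLoop(String line) {
        boolean flag = false;
        for (String k : KEYWORDS) {
            if (line.contains(k)) flag = true;
        }
        return flag;
    }

    // Equivalent one-liner: anyMatch stops at the first hit.
    static boolean matchesAnyMatch(String line) {
        return KEYWORDS.stream().anyMatch(line::contains);
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("ERROR disk full", "INFO ok", "FATAL oom");
        // Keeps only the ERROR and FATAL lines.
        System.out.println(lines.stream()
                                .filter(FilterDemo::matchesAnyMatch)
                                .collect(Collectors.toList()));
    }
}
```

Note this rewrite addresses readability only; the original serialization error comes from what the closure captures, not from the loop itself.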

Job aborted due to stage failure: Task not serializable:

2015-07-15 Thread Naveen Dabas
I am using the below code with the Kryo serializer. When I run this code I get this error: Task not serializable (at the commented line). 2) How are broadcast variables treated in the executor? Are they local variables, or can they be used in any function like global variables? object StreamingLogIn

Running SparkPi ( or JavaWordCount) example fails with "Job aborted due to stage failure: Task serialization failed"

2015-06-08 Thread Elkhan Dadashov
heduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s 15/06/08 19:03:38 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.063253 s Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serializati

Job aborted

2015-06-05 Thread gibbo87
I'm running PageRank on datasets with different sizes (from 1GB to 100GB). Sometime my job is aborted showing this error: Job aborted due to stage failure: Task 0 in stage 4.1 failed 4 times, most recent failure: Lost task 0.3 in stage 4.1 (TID 2051, 9.12.247.250): java.io.FileNotFoundExce
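A `FileNotFoundException` on a local path like `/mnt/md0/var/lib/…` partway through a large job usually means a shuffle file vanished: a full disk, a cleaned temp directory, or a lost executor. A hedged `spark-defaults.conf` sketch; the paths are illustrative assumptions, not the poster's actual layout:

```
# Spread shuffle/spill files across local disks with free space
# (paths are assumptions -- use directories that exist on every worker)
spark.local.dir   /mnt/md0/spark-local,/mnt/md1/spark-local
```

For iterative jobs like PageRank it is also worth confirming the workers are not filling their disks with shuffle data as iterations accumulate, since the exception surfaces only after a task has failed four times.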

Re: org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

2015-02-10 Thread Akhil Das
registered and have sufficient memory 15/02/11 12:22:46 > ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All > masters are unresponsive! Giving up. 15/02/11 12:22:46 INFO > TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed

org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

2015-02-10 Thread lakewood
SparkDeploySchedulerBackend: Shutting down all executors 15/02/11 12:22:46 INFO SparkDeploySchedulerBackend: Asking each executor to shut down org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up. at org.apache.spark.scheduler.DAGScheduler.o

Re: Job aborted due to stage failure: Master removed our application: FAILED

2014-08-21 Thread Yana
ng similar this morning, I believe because of ports... -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Job-aborted-due-to-stage-failure-Master-removed-our-application-FAILED-tp12573p12586.html Sent from the Apache Spark User List mailing list archive at

Job aborted due to stage failure: Master removed our application: FAILED

2014-08-21 Thread Kristoffer Sjögren
Hi I have trouble executing a really simple Java job on Spark 1.0.0-cdh5.1.0 that runs inside a Docker container: SparkConf sparkConf = new SparkConf().setAppName("TestApplication").setMaster("spark://localhost:7077"); JavaSparkContext ctx = new JavaSparkContext(sparkConf); JavaRDD<String> lines = ctx.te

Re: Job aborted due to stage failure: TID x failed for unknown reasons

2014-08-14 Thread jerryye
bump. same problem here. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Job-aborted-due-to-stage-failure-TID-x-failed-for-unknown-reasons-tp10187p12095.html Sent from the Apache Spark User List mailing list archive at Nabble.com

Re: Job aborted due to stage failure: TID x failed for unknown reasons

2014-07-22 Thread Alessandro Lulli
ateway.py", > line 537, in __call__ > File > "/net/antonin/home/user/Spark/spark-1.0.1-bin-hadoop2/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", > line 300, in get_return_value > py4j.protocol.Py4JJavaError: An error occurred while calling o27.collect. > : org.apache

Job aborted due to stage failure: TID x failed for unknown reasons

2014-07-18 Thread Shannon Quinn
1-bin-hadoop2/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", line 300, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o27.collect. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:13 failed 4 times, most recent failure: TID 32

Re: Job aborted: Spark cluster looks down

2014-03-07 Thread Mayur Rustagi
>> registered and have sufficient memory >>>> 14/03/05 23:24:43 INFO client.AppClient$ClientActor: Connecting to >>>> master spark://node1:7077... >>>> 14/03/05 23:24:51 WARN scheduler.TaskSchedulerImpl: Initial job has not >>>> accepted any resources; che

Re: Job aborted: Spark cluster looks down

2014-03-06 Thread Mayur Rustagi
/03/05 23:25:03 ERROR cluster.SparkDeploySchedulerBackend: Spark >> cluster looks dead, giving up. >> 14/03/05 23:25:03 INFO scheduler.TaskSchedulerImpl: Remove TaskSet 0.0 >> from pool >> 14/03/05 23:25:03 INFO scheduler.DAGScheduler: Failed to run >> saveAsNewAPIHad

Re: Job aborted: Spark cluster looks down

2014-03-06 Thread Christian
ool > 14/03/05 23:25:03 INFO scheduler.DAGScheduler: Failed to run > saveAsNewAPIHadoopFile at CondelCalc.scala:146 > Exception in thread "main" org.apache.spark.SparkException: Job aborted: > Spark cluster looks down > at > org.apache.spark.scheduler.DAGSchedul

Job aborted: Spark cluster looks down

2014-03-05 Thread Christian
adoopFile at CondelCalc.scala:146 Exception in thread "main" org.apache.spark.SparkException: Job aborted: Spark cluster looks down at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028) ... The ge