Hi,
I am trying to run a project which takes data as a DStream and dumps the
data into a Shark table after various operations. I am getting the
following error:
Exception in thread main org.apache.spark.SparkException: Job aborted:
Task 0.0:0 failed 1 times (most recent failure: Exception
I cannot tell where the error is coming from; can anyone please explain
the cause of this error and how to handle it? The spark.cleaner.ttl is set
to 4600, which I guess is more than enough to run the application.
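For reference, this is roughly how spark.cleaner.ttl is set in the 0.9.x API before the streaming context is created (a configuration sketch; the app name, batch interval, and variable names are illustrative, not from this thread):

```scala
// Sketch: setting spark.cleaner.ttl (in seconds) on the SparkConf.
// In Spark 0.9.x this enables periodic cleanup of old metadata, which
// long-running streaming applications generally need.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("DStreamToShark") // illustrative name
  .set("spark.cleaner.ttl", "4600") // seconds; should exceed any window/retention the job uses

val ssc = new StreamingContext(conf, Seconds(10)) // illustrative batch interval
```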
Spark Version : 0.9.0-incubating
Shark : 0.9.0-SNAPSHOT
Scala : 2.10.3
Thank You
Honey Joshi
On Jul 1, 2014 at 3:11 PM, Honey Joshi
honeyjo...@ideata-analytics.com
wrote:
Hi,
I am trying to run a project which takes data as a DStream and dumps the
data into a Shark table after various operations. I am getting the
following error:
Exception in thread main
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
We tried it with both Spark 0.9.1 and 1.0.0, with Scala 2.10.3. Can
anybody help me with this issue?
Thank You
Regards
Honey Joshi
Ideata-Analytics
On Wed, Jul 2, 2014 at 11:57 AM, Honey Joshi
honeyjo...@ideata-analytics.com wrote:
On Wed, July 2, 2014 1:11 am, Mayur Rustagi wrote:
Ideally you should be converting the RDD to a SchemaRDD.
Are you creating a UnionRDD to join across the DStream's RDDs?
Mayur Rustagi
Ph: +1 (760) 203 3257
http
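A rough sketch of the pattern Mayur is suggesting, assuming each batch RDD is mapped to a case class and handed to the SQL layer per batch instead of unioning raw RDDs across batches (the Event class, the socket source, the parsing, and all names are illustrative assumptions, not from this thread; this is not runnable without a Spark cluster):

```scala
// Sketch (Spark 0.9.x-era streaming API; identifiers are made up).
// Process each DStream batch independently via foreachRDD, converting
// records to a typed case class before handing them to the SQL layer,
// rather than building a UnionRDD across batches.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

case class Event(id: Int, value: String) // illustrative schema

object DStreamToShark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DStreamToShark")
    val ssc  = new StreamingContext(conf, Seconds(10))
    val lines = ssc.socketTextStream("localhost", 9999) // illustrative source

    lines.foreachRDD { rdd =>
      // Parse the current batch into the case class.
      val events = rdd.map(_.split(",")).collect {
        case Array(id, value) => Event(id.toInt, value)
      }
      // At this point `events` would be registered/inserted into the
      // Shark table (e.g. via SharkContext in Shark 0.9) -- shown only
      // schematically here, since the thread does not include that code.
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

The key design point is that the per-batch RDD is converted and written inside foreachRDD, so no state is retained across batches for spark.cleaner.ttl to reclaim mid-job.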
Original Message
Subject: matchError:null in ALS.train
From: Honey Joshi honeyjo...@ideata-analytics.com
Date: Thu, July 3, 2014 8:12 am
To: user@spark.apache.org
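For context on that subject line: in plain Scala (independent of ALS or Spark), a "MatchError: null" is what you get when a null value reaches a pattern match that has no case covering it. A minimal self-contained illustration (the function and values are invented for this example):

```scala
// Plain-Scala illustration of scala.MatchError on null -- not the
// thread's actual ALS code, just the general mechanism behind the
// "matchError:null" message.
def describe(v: String): String = v match {
  case "good" => "positive"
  case "bad"  => "negative"
  // no case matches null (or any other string), so scala.MatchError
  // is thrown at runtime
}

val caught =
  try { describe(null); "no error" }
  catch { case e: scala.MatchError => "MatchError: " + e.getMessage }

println(caught)
```

In practice this usually means a null slipped into data that the matching code assumed was always well-formed, so the fix is to filter or default nulls before the match (or add a catch-all case).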