Hi,   

Spark dispatches your tasks to the distributed (remote) executors.

When a task terminates due to an exception, the executor reports the failure, along with its cause (the exception), back to the driver.

So on the driver side you only see the *reason* for a task failure that actually happened on the remote end; a try-catch around your driver code cannot catch the original exception. If you want to handle it, you have to catch it inside the closure that runs on the executors.


Best,  

--  
Nan Zhu


On Thursday, October 23, 2014 at 6:40 PM, ankits wrote:

> Hi, I'm running a spark job and encountering an exception related to thrift.
> I wanted to know where this is being thrown, but the stack trace is
> completely useless. So I started adding try catches, to the point where my
> whole main method that does everything is surrounded with a try catch. Even
> then, nothing is being caught. I still see this message though:
>  
> 2014-10-23 15:39:50,845 ERROR [] Exception in task 1.0 in stage 1.0 (TID 1)
> java.io.IOException: org.apache.thrift.protocol.TProtocolException:
> .....
>  
> What is going on? Why isn't the exception just being handled by the
> try-catch? (BTW this is in Scala)
>  
>  
>  
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Exceptions-not-caught-tp17157.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>  
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org

