Re: How to cause a stage to fail (using spark-shell)?

2016-06-19 Thread Jacek Laskowski
Mind sharing code? I think only shuffle failures lead to stage failures and retries.

Jacek

On 19 Jun 2016 4:35 p.m., "Ted Yu" wrote:
> You can utilize a counter in external storage (e.g., NoSQL).
> When the counter reaches 2, stop throwing the exception so that the task
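Jacek's observation — that only shuffle (fetch) failures trigger stage-level retries — can be exercised from spark-shell, though it takes a trick: `FetchFailedException` is `private[spark]`, so a helper has to be compiled into the `org.apache.spark` package (for example via `:paste -raw` in a Scala 2.11 REPL). The sketch below is an unverified illustration; the constructor signature is an assumption matching the Spark 1.6/2.0 line and may differ in other versions, and `SimulateFetchFailure` is a made-up helper name.

```scala
// Entered via :paste -raw so the object lives in package org.apache.spark,
// which is required because FetchFailedException is private[spark].
package org.apache.spark

import org.apache.spark.shuffle.FetchFailedException

object SimulateFetchFailure {
  // Throwing FetchFailedException from a task makes the scheduler treat
  // the failure as a lost shuffle output rather than a plain task failure,
  // so the stage (not just the task) is resubmitted.
  def boom(): Nothing =
    throw new FetchFailedException(
      bmAddress = null, shuffleId = 0, mapId = 0, reduceId = 0,
      message = "simulated fetch failure")
}
```

It should then be thrown from a stage that follows a shuffle:

```scala
sc.parallelize(1 to 100, 4)
  .groupBy(_ % 4)   // introduces a shuffle, so a fetch failure is plausible here
  .mapPartitions { it => org.apache.spark.SimulateFetchFailure.boom(); it }
  .count()          // the stage is resubmitted a few times (visible as attempts
                    // in the web UI) before the job is finally aborted
```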

Re: How to cause a stage to fail (using spark-shell)?

2016-06-19 Thread Ted Yu
You can utilize a counter in external storage (e.g., NoSQL). When the counter reaches 2, stop throwing the exception so that the task passes.

FYI

On Sun, Jun 19, 2016 at 3:22 AM, Jacek Laskowski wrote:
> Hi,
>
> Thanks Burak for the idea, but it *only* fails the tasks that
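A lighter-weight variant of Ted's idea, as a sketch assuming a spark-shell session with `sc` in scope: instead of a counter in external storage, it uses `TaskContext.attemptNumber` (0 on the first attempt, incremented on each retry) as the per-task counter, so each task fails twice and then passes on its third attempt. This assumes `spark.task.maxFailures` (default 4) is greater than 2.

```scala
import org.apache.spark.TaskContext

// Each task throws on its first two attempts and succeeds on the third.
// The retries show up as failed task attempts in the web UI, but the
// stage itself is not retried -- only the tasks are.
sc.parallelize(1 to 100, 4).mapPartitions { it =>
  val attempt = TaskContext.get.attemptNumber
  if (attempt < 2)
    throw new RuntimeException(s"deliberate failure on attempt $attempt")
  it
}.count()
```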

Re: How to cause a stage to fail (using spark-shell)?

2016-06-19 Thread Jacek Laskowski
Hi,

Thanks Burak for the idea, but it *only* fails the tasks, which eventually fails the entire job; it does not fail a particular stage (just once or twice) before the entire job is failed. The idea is to see the attempts in the web UI, as there's special handling for cases where a stage failed once or twice before

Re: How to cause a stage to fail (using spark-shell)?

2016-06-18 Thread Burak Yavuz
Hi Jacek,

Can't you simply have a mapPartitions task throw an exception or something? Are you trying to do something more esoteric?

Best,
Burak

On Sat, Jun 18, 2016 at 5:35 AM, Jacek Laskowski wrote:
> Hi,
>
> Following up on this question, is a stage considered failed only
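Burak's suggestion as a minimal sketch, assuming a spark-shell session with `sc` in scope: every task in the stage throws, each task is retried up to `spark.task.maxFailures` times, and then the whole job is aborted — which, as Jacek notes in his reply, fails the job rather than producing a stage-level retry.

```scala
// Every task throws, so after spark.task.maxFailures attempts per task
// the job is aborted with a SparkException.
sc.parallelize(1 to 100, 4).mapPartitions { it =>
  throw new RuntimeException("boom")
  it  // unreachable, but keeps the function's return type Iterator[Int]
}.count()
```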

Re: How to cause a stage to fail (using spark-shell)?

2016-06-18 Thread Jacek Laskowski
Hi,

Following up on this question, is a stage considered failed only when there is a FetchFailed exception? Can I have a failed stage with only a single-stage job?

Appreciate any help on this... (as my family doesn't like me spending the weekend with Spark :))

Regards,
Jacek Laskowski