I tried a similar approach; it works well for user functions. But I need to
crash tasks or the executor when the Spark application runs "repartition", and
I didn't find any way to inject a "poison pill" into the repartition call :(
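
For the record, the closest workaround I can think of (only a rough sketch, not
tested): repartition itself is just the shuffle write, so the poison pill has to
go into the first transformation that consumes the shuffled data. Chaining
mapPartitions right after repartition and throwing for a chosen partition fails
a task of the post-repartition stage, and checking attemptNumber makes it fail
only on the first attempt so the retry can succeed. The helper name
failAfterRepartition and its parameters are mine, purely for illustration:

import org.apache.spark.TaskContext
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// Sketch: repartition, then blow up one task of the stage that reads the
// shuffle output, but only on the first attempt so the retried task succeeds.
def failAfterRepartition[T: ClassTag](rdd: RDD[T],
                                      numPartitions: Int,
                                      targetPartition: Int): RDD[T] =
  rdd.repartition(numPartitions).mapPartitions { iter =>
    val tc = TaskContext.get()
    if (tc.partitionId == targetPartition && tc.attemptNumber == 0) {
      throw new RuntimeException(s"poison pill in partition ${tc.partitionId}")
    }
    iter
  }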

On Mon, Feb 11, 2019 at 9:19 PM Vadim Semenov <va...@datadoghq.com> wrote:

> something like this
>
> import org.apache.spark.TaskContext
> ds.map(r => {
>   val taskContext = TaskContext.get()
>   // fail only the task that processes partition 1000
>   if (taskContext.partitionId == 1000) {
>     throw new RuntimeException
>   }
>   r
> })
>
> On Mon, Feb 11, 2019 at 8:41 AM Serega Sheypak <serega.shey...@gmail.com> wrote:
> >
> > I need to crash the task which does the repartition.
> >
> > On Mon, Feb 11, 2019 at 10:37 AM Gabor Somogyi <gabor.g.somo...@gmail.com> wrote:
> >>
> >> What prevents you from putting such if conditions inside the mentioned map function?
> >>
> >> On Mon, Feb 11, 2019 at 10:31 AM Serega Sheypak <serega.shey...@gmail.com> wrote:
> >>>
> >>> Yeah, but I don't need to crash the entire app, I want to fail several tasks or executors and then wait for completion.
> >>>
> >>> On Sun, Feb 10, 2019 at 9:49 PM Gabor Somogyi <gabor.g.somo...@gmail.com> wrote:
> >>>>
> >>>> Another approach is adding an artificial exception into the application's source code like this:
> >>>>
> >>>> val query = input.toDS.map(_ / 0).writeStream.format("console").start()
> >>>>
> >>>> G
> >>>>
> >>>>
> >>>> On Sun, Feb 10, 2019 at 9:36 PM Serega Sheypak <serega.shey...@gmail.com> wrote:
> >>>>>
> >>>>> Hi BR,
> >>>>> thanks for your reply. I want to mimic the issue and kill tasks at a certain stage. Killing an executor is also an option for me.
> >>>>> I'm curious: how do core Spark contributors test Spark's fault tolerance?
> >>>>>
> >>>>>
> >>>>> On Sun, Feb 10, 2019 at 4:57 PM Gabor Somogyi <gabor.g.somo...@gmail.com> wrote:
> >>>>>>
> >>>>>> Hi Serega,
> >>>>>>
> >>>>>> If I understand your problem correctly, you would like to kill one executor only and leave the rest of the app untouched.
> >>>>>> If that's true, yarn -kill is not what you want because it stops the whole application.
> >>>>>>
> >>>>>> I've done a similar thing when testing Spark's HA features:
> >>>>>> - jps -vlm | grep "org.apache.spark.executor.CoarseGrainedExecutorBackend.*applicationid"
> >>>>>> - kill -9 pidofoneexecutor
> >>>>>>
> >>>>>> Be aware that on a multi-node cluster you should check whether at least one such process runs on a specific node (it's not required that one does).
> >>>>>> Happy killing...
> >>>>>>
> >>>>>> BR,
> >>>>>> G
> >>>>>>
> >>>>>>
> >>>>>> On Sun, Feb 10, 2019 at 4:19 PM Jörn Franke <jornfra...@gmail.com> wrote:
> >>>>>>>
> >>>>>>> yarn application -kill applicationid ?
> >>>>>>>
> >>>>>>> > On Feb 10, 2019, at 1:30 PM, Serega Sheypak <serega.shey...@gmail.com> wrote:
> >>>>>>> >
> >>>>>>> > Hi there!
> >>>>>>> > I have a weird issue that appears only when tasks fail at a specific stage. I would like to imitate the failure on my own.
> >>>>>>> > The plan is to run the problematic app and then kill an entire executor or some tasks when execution reaches a certain stage.
> >>>>>>> >
> >>>>>>> > Is it do-able?
> >>>>>>>
> >>>>>>>
>
>
> --
> Sent from my iPhone
>
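
Related to the kill -9 approach quoted above, another thing I may try (again
only a sketch, untested) is to crash the whole executor JVM from inside a task
instead of merely failing the task, reusing the ds.map pattern from the snippet
above (so the usual encoder/implicits are assumed to be in scope).
Runtime.getRuntime.halt skips shutdown hooks, so the executor dies abruptly,
much like kill -9, while the driver stays up and reschedules the lost tasks;
the target partition id here is arbitrary:

import org.apache.spark.TaskContext

ds.map { r =>
  // Kill the entire executor that happens to process partition 0;
  // exit code 137 mimics a SIGKILL'ed process.
  if (TaskContext.get().partitionId == 0) {
    Runtime.getRuntime.halt(137)
  }
  r
}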
