What is it you're actually trying to accomplish?

On Mon, Oct 10, 2016 at 5:26 AM, Samy Dindane <s...@dindane.com> wrote:
> I managed to make a specific executor crash by checking
> TaskContext.get.partitionId and throwing an exception when the task is
> handling one specific partition.
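>
> For reference, this is roughly how I trigger the failure (a sketch; the
> partition number 3 and the process function are placeholders):
>
>     import org.apache.spark.TaskContext
>
>     // stream is the Kafka direct stream created elsewhere in the job.
>     stream.foreachRDD { rdd =>
>       rdd.foreachPartition { records =>
>         // Fail only the task that handles the chosen partition; Spark will
>         // retry a failed task up to spark.task.maxFailures times.
>         if (TaskContext.get.partitionId == 3) {
>           throw new RuntimeException("Simulated failure on partition 3")
>         }
>         records.foreach(process)
>       }
>     }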
>
> The issue I have now is that the whole job stops when a single executor
> crashes.
> Do I need to explicitly tell Spark to start a new executor and keep the
> other ones running?
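>
> For context, I haven't changed any failure-related settings; the streaming
> context is created roughly like this (app name and batch interval are
> placeholders, and spark.task.maxFailures is shown at its default):
>
>     import org.apache.spark.SparkConf
>     import org.apache.spark.streaming.{Seconds, StreamingContext}
>
>     val conf = new SparkConf()
>       .setAppName("my-streaming-job")       // placeholder name
>       .set("spark.task.maxFailures", "4")   // default: a task that fails 4 times aborts the whole job
>     val ssc = new StreamingContext(conf, Seconds(5))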
>
>
> On 10/10/2016 11:19 AM, Samy Dindane wrote:
>>
>> Hi,
>>
>> I am writing a streaming job that reads a Kafka topic.
>> As far as I understand, Spark maintains a 1:1 mapping between its executors
>> and Kafka partitions.
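>>
>> For reference, I create the stream roughly like this (assuming the Kafka
>> 0.10 direct API; broker address, group id and topic name are placeholders):
>>
>>     import org.apache.kafka.common.serialization.StringDeserializer
>>     import org.apache.spark.streaming.kafka010.KafkaUtils
>>     import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
>>     import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
>>
>>     val kafkaParams = Map[String, Object](
>>       "bootstrap.servers" -> "broker:9092",
>>       "key.deserializer" -> classOf[StringDeserializer],
>>       "value.deserializer" -> classOf[StringDeserializer],
>>       "group.id" -> "my-streaming-job",
>>       "auto.offset.reset" -> "earliest",
>>       "enable.auto.commit" -> (false: java.lang.Boolean)
>>     )
>>
>>     // With the direct stream, each Kafka partition becomes exactly one Spark
>>     // partition, i.e. one task per Kafka partition per batch.
>>     val stream = KafkaUtils.createDirectStream[String, String](
>>       ssc,
>>       PreferConsistent,
>>       Subscribe[String, String](Seq("my-topic"), kafkaParams)
>>     )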
>>
>> In order to correctly implement my checkpoint logic, I'd like to know what
>> exactly happens when an executor crashes.
>> Also, is it possible to kill an executor manually for testing purposes?
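>>
>> My checkpoint logic is roughly the following (a sketch; saveOffsets is a
>> hypothetical helper that persists the offsets to external storage):
>>
>>     import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}
>>
>>     stream.foreachRDD { rdd =>
>>       // The cast only works on the RDD that comes straight from createDirectStream.
>>       val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
>>       // ... process the batch ...
>>       saveOffsets(offsetRanges)
>>     }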
>>
>> Thank you.
>>
>> Samy
>>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
