the deployment mode to yarn-client.
>
> Thanks
> Deepak
>
>
> On Fri, May 13, 2016 at 10:17 AM, Rakesh H (Marketing Platform-BLR) <
> rakes...@flipkart.com> wrote:
>
>> Ping!!
>> Has anybody tested graceful shutdown of a spark streaming in yarn-cluster
>> mode? It looks like a defect to me.
Ping!!
Has anybody tested graceful shutdown of a spark streaming in yarn-cluster
mode? It looks like a defect to me.
On Thu, May 12, 2016 at 12:53 PM Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
> We are on spark 1.5.1
> Above change was to add a shutdown hook.
We are on spark 1.5.1
Above change was to add a shutdown hook.
I am not adding a shutdown hook in code, so the inbuilt shutdown hook is
being called.
Driver signals that it is going to do a graceful shutdown, but the executor
sees that the driver is dead and shuts down abruptly.
Could this issue be related to
This is happening because spark context shuts down without shutting down
the ssc first.
This was the behavior till spark 1.4 and was addressed in later releases.
https://github.com/apache/spark/pull/6307
Which version of spark are you on?
Thanks
Deepak
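For reference, the explicit equivalent of what that fix does on shutdown can be sketched as follows. This is illustrative only, not code from this thread: it assumes a `StreamingContext` named `ssc` is in scope, and the helper name is made up.

```scala
import org.apache.spark.streaming.StreamingContext

// Illustrative sketch: stop the StreamingContext gracefully (draining
// in-flight batches) before the underlying SparkContext is torn down.
// `ssc` is assumed to be the application's StreamingContext.
def addGracefulStopHook(ssc: StreamingContext): Unit = {
  sys.addShutdownHook {
    // stopSparkContext = true, stopGracefully = true: finish processing
    // already-received data, then shut everything down.
    ssc.stop(stopSparkContext = true, stopGracefully = true)
  }
}
```

With the fix referenced above, Spark's own built-in shutdown hook performs this ordering itself, so an explicit hook like this should not normally be needed on recent releases.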
On Thu, May 12, 2016 at 12:14 PM, Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
Yes, it seems to be the case.
In this case executors should have continued logging values till 300, but
they are shut down as soon as I do "yarn kill .."
On Thu, May 12, 2016 at 12:11 PM Deepak Sharma
wrote:
> So in your case, the driver is shutting down gracefully, but the
> executors are not.
So in your case, the driver is shutting down gracefully, but the
executors are not.
Is this the problem?
Thanks
Deepak
On Thu, May 12, 2016 at 11:49 AM, Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
> Yes, it is set to true.
> Log of driver :
>
> 16/05/12 10:18:29 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
Yes, it is set to true.
Log of driver :
16/05/12 10:18:29 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
16/05/12 10:18:29 INFO streaming.StreamingContext: Invoking
stop(stopGracefully=true) from shutdown hook
16/05/12 10:18:29 INFO scheduler.JobGenerator: Stopping JobGenerator
Hi Rakesh
Did you try setting *spark.streaming.stopGracefullyOnShutdown* to *true* for
your spark configuration instance?
If not, try this and let us know if it helps.
Thanks
Deepak
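Setting this on the configuration instance would look something like the sketch below; the app name and batch interval are made up, and only the `.set(...)` line matters here.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// The relevant line is the .set(...); app name and batch interval
// are arbitrary placeholders for this sketch.
val conf = new SparkConf()
  .setAppName("my-streaming-app")
  .set("spark.streaming.stopGracefullyOnShutdown", "true")
val ssc = new StreamingContext(conf, Seconds(5))
```

The same property can also be supplied at submit time, e.g. via `--conf spark.streaming.stopGracefullyOnShutdown=true` on spark-submit, without touching the code.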
On Thu, May 12, 2016 at 11:42 AM, Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
> Issue I am having is similar to the one mentioned here :
Issue I am having is similar to the one mentioned here :
http://stackoverflow.com/questions/36911442/how-to-stop-gracefully-a-spark-streaming-application-on-yarn
I am creating an RDD from a sequence of 1 to 300 and creating a streaming
RDD out of it.
val rdd = ssc.sparkContext.parallelize(1 to 300)
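A minimal version of that test setup can be sketched with a queue-backed stream. This is an assumption about the shape of the test, not the poster's actual code: `ssc` and the logging action are placeholders.

```scala
import scala.collection.mutable
import org.apache.spark.rdd.RDD

// Sketch: feed the 1..300 RDD into a queue-backed DStream so executors
// print values batch by batch; a graceful stop should let this drain
// instead of cutting the executors off mid-stream. `ssc` is assumed
// to be an existing StreamingContext.
val rdd = ssc.sparkContext.parallelize(1 to 300)
val queue = mutable.Queue[RDD[Int]](rdd)
val stream = ssc.queueStream(queue, oneAtATime = true)
stream.foreachRDD(_.foreach(v => println(s"value: $v")))
```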
for the local and cluster mode of Spark Standalone as
well as EMR.
Does sbin/stop-all.sh stop the context gracefully? How is it done? Is
there a signal sent to the driver process?
For EMR, is there a way how to terminate an EMR cluster with Spark
Streaming graceful shutdown?
Thanks!
--
Thanks Regards,
Anshu Shukla
Hello all,
I have a spark streaming application running in a standalone cluster
(deployed with spark-submit --deploy-mode cluster). I am trying to add
graceful shutdown functionality to this application but I am not sure what
is the best practice for this.
Currently I am using this code:
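One common shape for such a graceful-shutdown routine is a marker-file check in the driver. This is a hypothetical sketch, not necessarily the approach used here: `ssc`, the helper name, and the marker path are all assumptions.

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.streaming.StreamingContext

// Hypothetical pattern: poll for an external "stop" marker file and stop
// the StreamingContext gracefully when it appears, rather than relying on
// a signal reaching the driver process.
def runUntilMarker(ssc: StreamingContext, markerPath: String): Unit = {
  val fs = FileSystem.get(ssc.sparkContext.hadoopConfiguration)
  var finished = false
  while (!finished) {
    // awaitTerminationOrTimeout returns true once the context has stopped
    finished = ssc.awaitTerminationOrTimeout(10000)
    if (!finished && fs.exists(new Path(markerPath))) {
      ssc.stop(stopSparkContext = true, stopGracefully = true)
      finished = true
    }
  }
}
```

Creating the marker file (e.g. with `hdfs dfs -touchz`) then triggers a drain-and-stop from inside the driver, which works the same way in local, standalone, and cluster deployments.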