[ 
https://issues.apache.org/jira/browse/SPARK-21321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jong Yoon Lee updated SPARK-21321:
----------------------------------
    Description: 
On shutdown, Spark can be very verbose and can spit out errors that confuse the 
user. 

If possible, we should not print these out and instead suppress them by 
changing the log level from WARNING to DEBUG.

Also, shutdown of Spark can take a long time because of the backlog of events in 
the event queue (warnings like: Message $message dropped. ${e.getMessage}).

This happens more often with dynamic allocation on.

I am suggesting to:
1. Change the log level when the shutdown is happening and the RPC connections 
are closed (RpcEnvStoppedException).

2. Clear the event queue when the RPC module is stopped and Spark is shutting down.
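
A minimal, self-contained Scala sketch of both suggestions. The ToyDispatcher 
and RpcEnvStoppedException below are illustrative stand-ins only, not the actual 
Spark Core classes (the real dispatcher code lives under org.apache.spark.rpc): 
messages dropped because the RPC environment has already stopped are logged at 
DEBUG rather than WARNING, and stopping the dispatcher clears any queued events.

{code:scala}
import java.util.concurrent.ConcurrentLinkedQueue
import org.slf4j.LoggerFactory

// Illustrative stand-in for the real exception thrown once the RpcEnv is stopped.
class RpcEnvStoppedException extends IllegalStateException("RpcEnv already stopped.")

// Illustrative stand-in for the RPC dispatcher; not Spark's actual Dispatcher.
class ToyDispatcher {
  private val log = LoggerFactory.getLogger(classOf[ToyDispatcher])
  private val queue = new ConcurrentLinkedQueue[String]()
  @volatile private var stopped = false

  def postMessage(message: String): Unit = {
    try {
      if (stopped) throw new RpcEnvStoppedException
      queue.add(message)
    } catch {
      // Suggestion 1: drops that are expected during a normal shutdown
      // are logged at DEBUG instead of WARNING.
      case e: RpcEnvStoppedException =>
        log.debug(s"Message $message dropped. ${e.getMessage}")
      case e: Exception =>
        log.warn(s"Message $message dropped. ${e.getMessage}")
    }
  }

  def stop(): Unit = {
    stopped = true
    // Suggestion 2: discard the backlog so shutdown does not spend time
    // draining stale events.
    queue.clear()
  }
}
{code}

With something like this, the RpcEnvStoppedException noise at shutdown only 
shows up when DEBUG logging is enabled, while genuinely unexpected drops still 
surface at WARNING.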

  was:
on shutdown spark can be very verbose and spit out errors that cause the user 
to be confused. 

If possible we should not print those out and just ignore them.
This happens more with dynamic allocation on.

I am suggesting to change the log level when the shutdown is happening and the 
RPC connections are closed(RpcEnvStoppedException).


> Spark very verbose on shutdown confusing users
> ----------------------------------------------
>
>                 Key: SPARK-21321
>                 URL: https://issues.apache.org/jira/browse/SPARK-21321
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Jong Yoon Lee
>            Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> On shutdown, Spark can be very verbose and can spit out errors that confuse 
> the user. 
> If possible, we should not print these out and instead suppress them by 
> changing the log level from WARNING to DEBUG.
> Also, shutdown of Spark can take a long time because of the backlog of events 
> in the event queue (warnings like: Message $message dropped. 
> ${e.getMessage}).
> This happens more often with dynamic allocation on.
> I am suggesting to:
> 1. Change the log level when the shutdown is happening and the RPC 
> connections are closed (RpcEnvStoppedException).
> 2. Clear the event queue when the RPC module is stopped and Spark is shutting 
> down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
