[jira] [Commented] (SPARK-17630) jvm-exit-on-fatal-error handler for spark.rpc.netty like there is available for akka

2016-10-19 Shixiong Zhu (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589463#comment-15589463 ]

Shixiong Zhu commented on SPARK-17630:
--

Just set up SparkUncaughtExceptionHandler as the default 
UncaughtExceptionHandler in the main methods of 
org.apache.spark.api.python.PythonGatewayServer and org.apache.spark.api.r.RBackend.
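
For illustration, a minimal sketch of what that wiring might look like inside Spark's own source (assuming the SparkUncaughtExceptionHandler object in org.apache.spark.util, and eliding the existing startup code in those main methods):

{code:scala}
// Sketch only: register the process-wide fatal-error handler first thing in
// PythonGatewayServer.main (and likewise in RBackend.main), so an OOM on any
// thread exits the JVM and the Py4J / R caller sees the failure instead of hanging.
object PythonGatewayServer {
  def main(args: Array[String]): Unit = {
    // Assumption: SparkUncaughtExceptionHandler is the object in org.apache.spark.util
    // that logs the uncaught Throwable and exits the JVM with a non-zero status;
    // in other Spark versions it may be a class that has to be instantiated.
    Thread.setDefaultUncaughtExceptionHandler(
      org.apache.spark.util.SparkUncaughtExceptionHandler)

    // ... existing gateway-server startup logic ...
  }
}
{code}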

> jvm-exit-on-fatal-error handler for spark.rpc.netty like there is available 
> for akka
> 
>
> Key: SPARK-17630
> URL: https://issues.apache.org/jira/browse/SPARK-17630
> Project: Spark
>  Issue Type: Question
>  Components: Spark Core
> Affects Versions: 1.6.0
> Reporter: Mario Briggs
> Attachments: SecondCodePath.txt, firstCodepath.txt
>
>
> Hi,
> I have two code paths in my app that result in a JVM OOM.
> In the first code path, 'akka.jvm-exit-on-fatal-error' kicks in and shuts
> down the JVM, so the caller (Py4J) gets notified with a proper stack trace.
> Attached stack-trace file (firstCodepath.txt).
> In the second code path (rpc.netty), no such handler kicks in to shut down
> the JVM, so the caller does not get notified.
> Attached stack-trace file (SecondCodepath.txt).
> Is it possible to have a JVM exit handler for the rpc.netty path?






[jira] [Commented] (SPARK-17630) jvm-exit-on-fatal-error handler for spark.rpc.netty like there is available for akka

2016-10-18 Mario Briggs (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587664#comment-15587664 ]

Mario Briggs commented on SPARK-17630:
--

[~zsxwing] thanks much. Any pointers on how/where to add the code, or something 
existing in the code base to look at? I can then try a PR.







[jira] [Commented] (SPARK-17630) jvm-exit-on-fatal-error handler for spark.rpc.netty like there is available for akka

2016-10-18 Shixiong Zhu (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15586979#comment-15586979 ]

Shixiong Zhu commented on SPARK-17630:
--

Yeah, I think we can set up SparkUncaughtExceptionHandler for R and Python 
users.
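
To make the intent concrete, here is an illustrative sketch (not Spark's actual handler) of the pattern such a default handler follows: log the uncaught error and terminate the JVM, so an external caller such as Py4J or the R backend observes the process exit rather than hanging:

{code:scala}
// Illustration of the pattern only; the exit codes are arbitrary examples.
object ExitOnFatalError extends Thread.UncaughtExceptionHandler {
  override def uncaughtException(thread: Thread, throwable: Throwable): Unit = {
    System.err.println(s"Uncaught exception in thread ${thread.getName}: $throwable")
    throwable match {
      // halt() skips shutdown hooks, which may themselves fail after an OOM
      case _: OutOfMemoryError => Runtime.getRuntime.halt(52)
      case _                   => System.exit(50)
    }
  }
}

// Installed once at process start, e.g. in the entry point's main method:
// Thread.setDefaultUncaughtExceptionHandler(ExitOnFatalError)
{code}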



