[
https://issues.apache.org/jira/browse/SPARK-19094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898114#comment-15898114
]
Kyle Kelley commented on SPARK-19094:
-------------------------------------
Super interested in this, as it's been confusing for our users. I've thought
about making an alternate endpoint for a kernel to get logs out of, but it
would be much better to re-route these logs so that the Python kernel can
handle them directly.
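The re-routing idea above can be sketched in plain Python. This is an illustrative example only, not an existing Spark or Jupyter API: it shows why JVM log lines bypass the notebook (Jupyter replaces sys.stdout/sys.stderr with Python-level objects, while the JVM writes to the underlying file descriptors) and how a kernel could recapture them by redirecting the descriptor itself. The helper name `capture_fd2` is hypothetical.

```python
import os

# Hypothetical sketch: Jupyter swaps out sys.stdout/sys.stderr, so anything
# written straight to file descriptor 2 (as the JVM's log output is) never
# reaches the notebook. Redirecting fd 2 into a pipe lets the Python side
# read those lines back.

def capture_fd2(writer):
    """Run writer() while fd 2 is redirected into a pipe; return captured text."""
    read_fd, write_fd = os.pipe()
    saved = os.dup(2)             # keep a handle on the real stderr
    os.dup2(write_fd, 2)          # point fd 2 at the pipe's write end
    try:
        writer()
    finally:
        os.dup2(saved, 2)         # restore the real stderr
        os.close(saved)
        os.close(write_fd)
    data = os.read(read_fd, 65536).decode()
    os.close(read_fd)
    return data

# os.write(2, ...) bypasses sys.stderr exactly the way JVM output does in a
# notebook; here it stands in for a log4j log line.
captured = capture_fd2(lambda: os.write(2, b"WARN JVM-style log line\n"))
```

A real implementation would additionally need to parse and forward the captured lines through the kernel's iopub channel, but the descriptor-level redirection is the core of the re-routing.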
> Plumb through logging/error messages from the JVM to Jupyter PySpark
> --------------------------------------------------------------------
>
> Key: SPARK-19094
> URL: https://issues.apache.org/jira/browse/SPARK-19094
> Project: Spark
> Issue Type: Improvement
> Components: PySpark
> Reporter: holdenk
> Priority: Trivial
>
> Jupyter/IPython notebooks work by overriding sys.stdout & sys.stderr; as
> a result, the error messages that show up in Jupyter/IPython are often missing
> the related logs - which are often more useful than the exception itself.
> Fixing this could make it easier for Python developers getting started with
> Spark on their local laptops to debug their applications, since otherwise they
> need to remember to keep going back to the terminal where they launched the
> notebook. One counterpoint is that Spark's logging is fairly verbose, but
> since we provide the ability for the user to tune the log messages from within
> the notebook, that should be OK.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)