[ https://issues.apache.org/jira/browse/SPARK-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609065#comment-14609065 ]
Neal McBurnett commented on SPARK-2141:
---------------------------------------
The Jupyter use case makes sense to me, as does the case where multiple
users share a single SparkContext (sc), as with Databricks notebooks.
Note that this would also be handy for working around
https://issues.apache.org/jira/browse/SPARK-8707 "RDD#toDebugString fails if
any cached RDD has invalid partitions".
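Until {{sc.getPersistentRDDs()}} lands in PySpark, a rough workaround is to go
through the JVM gateway. The sketch below is only an illustration: it assumes
reaching into the private {{sc._jsc}} handle is acceptable and that
{{JavaSparkContext.getPersistentRDDs()}} is available in the Spark version at
hand, neither of which is a supported public API for Python users.
{code:python}
# Rough sketch only: approximate getPersistentRDDs() from PySpark by going
# through the private _jsc handle (an internal, version-dependent attribute).
from pyspark import SparkContext

sc = SparkContext(appName="list-persistent-rdds")

rdd = sc.parallelize(range(100)).cache()
rdd.count()  # trigger a job; the cached data is materialized on first computation

# Assumes JavaSparkContext.getPersistentRDDs() exists in this Spark version;
# it returns a java.util.Map<Integer, JavaRDD>, which Py4J exposes dict-like.
java_map = sc._jsc.getPersistentRDDs()
for rdd_id in java_map.keySet():
    jrdd = java_map.get(rdd_id)
    print("id=%s storage=%s" % (rdd_id, jrdd.getStorageLevel().description()))
    # For the SPARK-8707 workaround, an offending RDD could be dropped here:
    # jrdd.unpersist()
{code}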
> Add sc.getPersistentRDDs() to PySpark
> -------------------------------------
>
> Key: SPARK-2141
> URL: https://issues.apache.org/jira/browse/SPARK-2141
> Project: Spark
> Issue Type: New Feature
> Components: PySpark
> Affects Versions: 1.0.0
> Reporter: Nicholas Chammas
> Assignee: Kan Zhang
>
> PySpark does not appear to have {{sc.getPersistentRDDs()}}.