Repository: spark
Updated Branches:
  refs/heads/master 445a755b8 -> 3df85dccb


[SPARK-5871] output explain in Python

Author: Davies Liu <dav...@databricks.com>

Closes #4658 from davies/explain and squashes the following commits:

db87ea2 [Davies Liu] output explain in Python
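
For readers of this commit: the user-facing effect is that DataFrame.explain() now prints the
plan from Python itself instead of delegating to the JVM-side explain, so the output shows up
in the Python console (and in doctests). A minimal usage sketch, assuming the Spark 1.3-era
SQLContext API; the app name and variable names below are illustrative, not from this patch:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext, Row

    # Illustrative setup; an existing sc/sqlContext from a running shell works too.
    sc = SparkContext(appName="explain-demo")
    sqlContext = SQLContext(sc)

    df = sqlContext.createDataFrame([Row(name='Alice', age=2),
                                     Row(name='Bob', age=5)])

    df.explain()       # prints only the physical plan
    df.explain(True)   # prints parsed, analyzed, optimized and physical plans
    sc.stop()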


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3df85dcc
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3df85dcc
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3df85dcc

Branch: refs/heads/master
Commit: 3df85dccbc8fd1ba19bbcdb8d359c073b1494d98
Parents: 445a755
Author: Davies Liu <dav...@databricks.com>
Authored: Tue Feb 17 13:48:38 2015 -0800
Committer: Michael Armbrust <mich...@databricks.com>
Committed: Tue Feb 17 13:48:38 2015 -0800

----------------------------------------------------------------------
 python/pyspark/sql/dataframe.py | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3df85dcc/python/pyspark/sql/dataframe.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index 8417240..388033d 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -244,8 +244,25 @@ class DataFrame(object):
         debugging purpose.
 
         If extended is False, only prints the physical plan.
-        """
-        self._jdf.explain(extended)
+
+        >>> df.explain()
+        PhysicalRDD [age#0,name#1], MapPartitionsRDD[...] at mapPartitions at SQLContext.scala:...
+
+        >>> df.explain(True)
+        == Parsed Logical Plan ==
+        ...
+        == Analyzed Logical Plan ==
+        ...
+        == Optimized Logical Plan ==
+        ...
+        == Physical Plan ==
+        ...
+        == RDD ==
+        """
+        if extended:
+            print self._jdf.queryExecution().toString()
+        else:
+            print self._jdf.queryExecution().executedPlan().toString()
 
     def isLocal(self):
         """
@@ -1034,7 +1051,7 @@ def _test():
                                   Row(name='Bob', age=5, height=85)]).toDF()
     (failure_count, test_count) = doctest.testmod(
         pyspark.sql.dataframe, globs=globs,
-        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE)
+        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.REPORT_NDIFF)
     globs['sc'].stop()
     if failure_count:
         exit(-1)
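
The second hunk only adds doctest.REPORT_NDIFF, so a failing doctest in pyspark.sql.dataframe
is reported as an ndiff between expected and actual output. A small self-contained sketch of
the same flag combination; the greet function is an invented example, not from the Spark sources:

    import doctest

    def greet(name):
        """
        >>> greet('Alice')
        'Hello, Alice...'
        """
        # ELLIPSIS lets the "..." in the expected output above match the rest.
        return 'Hello, %s and everyone else' % name

    if __name__ == '__main__':
        # NORMALIZE_WHITESPACE ignores whitespace-only differences, and
        # REPORT_NDIFF prints failures as an ndiff -- the flag this patch
        # adds to the existing options.
        doctest.testmod(optionflags=doctest.ELLIPSIS |
                        doctest.NORMALIZE_WHITESPACE |
                        doctest.REPORT_NDIFF)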

