HyukjinKwon commented on a change in pull request #26970: 
[SPARK-28825][SQL][DOC] Documentation for Explain Command
URL: https://github.com/apache/spark/pull/26970#discussion_r360743650
 
 

 ##########
 File path: docs/sql-ref-syntax-qry-explain.md
 ##########
 @@ -19,4 +19,157 @@ license: |
   limitations under the License.
 ---
 
-**This page is under construction**
+### Description
+
+The `EXPLAIN` statement shows the execution plan for the input statement.
+By default, `EXPLAIN` provides information about the physical plan only.
+`EXPLAIN` does not support the `DESCRIBE TABLE` statement.
+
+
+### Syntax
+{% highlight sql %}
+EXPLAIN [EXTENDED | CODEGEN] statement
+{% endhighlight %}
+
+### Parameters
+
+<dl>
+  <dt><code><em>EXTENDED</em></code></dt>
+  <dd>Generates the Parsed Logical Plan, Analyzed Logical Plan, Optimized Logical Plan, and Physical Plan.</dd>
+  <dt><code><em>CODEGEN</em></code></dt>
+  <dd>Generates the code for the statement, if any.</dd>
+</dl>
+
+### Examples
+{% highlight sql %}
+
+-- Using EXTENDED
+
+EXPLAIN EXTENDED SELECT * FROM emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Parsed Logical Plan ==
+'Project [*]
++- 'UnresolvedRelation [emp]
+
+== Analyzed Logical Plan ==
+id: int
+Project [id#0]
++- SubqueryAlias `default`.`emp`
+   +- Relation[id#0] parquet
+
+== Optimized Logical Plan ==
+Relation[id#0] parquet
+
+== Physical Plan ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#0] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+ |
++----------------------------------------------------+
+
+-- Default output
+
+EXPLAIN SELECT * FROM emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Physical Plan ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#0] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+
+ |
++----------------------------------------------------+
+
+
+-- Using CODEGEN
+
+EXPLAIN CODEGEN SELECT * FROM emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| Found 1 WholeStageCodegen subtrees.
+== Subtree 1 / 1 (maxMethodCodeSize:192; maxConstantPoolSize:127(0.19% used); numInnerClasses:0) ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#0] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+
+Generated code:
 
 Review comment:
   Yeah, I don't think we should write down generated code in the docs either. +1 for @maropu's advice. We can just show a couple of simple examples and that's it.
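   For instance, a couple of simple examples could look like the following (purely illustrative; `t` is a hypothetical table, and the output is omitted since plans depend on the environment):

   ```sql
   -- Hypothetical table used only for illustration
   CREATE TABLE t (key INT, value STRING) USING parquet;

   -- Default: show only the physical plan
   EXPLAIN SELECT key FROM t WHERE key > 0;

   -- EXTENDED: show the parsed, analyzed, and optimized logical plans
   -- in addition to the physical plan
   EXPLAIN EXTENDED SELECT key FROM t WHERE key > 0;
   ```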

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
