[ https://issues.apache.org/jira/browse/SPARK-17328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15547102#comment-15547102 ]
Dongjoon Hyun commented on SPARK-17328:
---------------------------------------

Thank YOU, [~ja...@japila.pl]!

> NPE with EXPLAIN DESCRIBE TABLE
> -------------------------------
>
>                 Key: SPARK-17328
>                 URL: https://issues.apache.org/jira/browse/SPARK-17328
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.1
>            Reporter: Jacek Laskowski
>            Priority: Minor
>
> With today's build:
> {code}
> scala> sql("EXPLAIN DESCRIBE TABLE x").show(truncate = false)
> INFO SparkSqlParser: Parsing command: EXPLAIN DESCRIBE TABLE x
> java.lang.NullPointerException
>   at org.apache.spark.sql.execution.command.ExplainCommand.run(commands.scala:104)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:88)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:88)
>   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:62)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:569)
>   ... 48 elided
> {code}
> while the following executes fine:
> {code}
> scala> sql("describe table x").explain
> INFO SparkSqlParser: Parsing command: describe table x
> org.apache.spark.sql.catalyst.parser.ParseException:
> Unsupported SQL statement
> == SQL ==
> describe table x
>   at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:58)
>   at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
>   at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
>   at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
>   at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:569)
>   ... 48 elided
> {code}
> I think it's related to the condition in
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala#L262.
> If guided I'd like to work on it.
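The stack traces suggest a failure pattern worth illustrating: the EXPLAIN path only builds a child plan for statements its parser visitor recognizes, so an unhandled inner statement such as DESCRIBE TABLE can leave the child plan null, and the later step that renders the plan dereferences it and throws the NPE. The sketch below is a minimal, self-contained illustration of that pattern, not Spark's actual code; the names (`Plan`, `parseStatement`, `parseExplain`) are hypothetical.

```scala
// Hypothetical sketch of the suspected failure mode (not Spark's real classes).
sealed trait Plan { def treeString: String }

case class SelectPlan(sql: String) extends Plan {
  def treeString: String = s"Select($sql)"
}

// Mirrors the shape of ExplainCommand: rendering calls into the child plan,
// so a null child produces a NullPointerException at render time.
case class ExplainCommand(child: Plan) extends Plan {
  def treeString: String = child.treeString
}

// Stand-in for the inner-statement parser: only some statements are handled;
// an unsupported one (e.g. DESCRIBE TABLE here) falls through as null.
def parseStatement(sql: String): Plan =
  if (sql.toLowerCase.startsWith("select")) SelectPlan(sql)
  else null

// Stand-in for the EXPLAIN visitor: it wraps whatever parseStatement returned,
// without the guard that (presumably) the condition at SparkSqlParser.scala#L262
// should provide -- rejecting unsupported inner statements instead of
// propagating a null child.
def parseExplain(sql: String): ExplainCommand = {
  val inner = sql.stripPrefix("EXPLAIN ").trim
  ExplainCommand(parseStatement(inner))
}
```

Under this sketch, `parseExplain("EXPLAIN SELECT 1").treeString` succeeds, while `parseExplain("EXPLAIN DESCRIBE TABLE x").treeString` throws a NullPointerException at the render step, matching the reported trace; the fix would be to fail with a parse error (as plain `describe table x` does) rather than build the command with a null child.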