[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2021-04-14 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r613037609



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala
##
@@ -268,6 +271,7 @@ class ResolveSessionCatalog(
 // session catalog and the table provider is not v2.
 case c @ CreateTableStatement(
  SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _) =>
+  assertNoNullTypeInSchema(c.tableSchema)

Review comment:
   @LantaoJin do you have time to fix it? I think we can simply remove the 
null type check and add a few tests with both the in-memory and Hive catalogs.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2021-04-14 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r613011264



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala
##
@@ -268,6 +271,7 @@ class ResolveSessionCatalog(
 // session catalog and the table provider is not v2.
 case c @ CreateTableStatement(
  SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _) =>
+  assertNoNullTypeInSchema(c.tableSchema)

Review comment:
   @bart-samwel this makes sense, shall we also support `CREATE TABLE t(c VOID)`? 
Your case seems like CTAS only.







[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2021-04-13 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r612917435



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala
##
@@ -268,6 +271,7 @@ class ResolveSessionCatalog(
 // session catalog and the table provider is not v2.
 case c @ CreateTableStatement(
  SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _) =>
+  assertNoNullTypeInSchema(c.tableSchema)

Review comment:
   I don't know of any database that supports creating tables with a null/void 
type column, so this change is not for Hive compatibility but for reasonable 
SQL semantics.
   
   I agree this is a breaking change that should at least be documented in the 
migration guide. A legacy config could also be added, but I can't find a 
reasonable use case for a null type column.
   
   







[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-07-07 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r451248585



##
File path: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##
@@ -2309,6 +2310,126 @@ class HiveDDLSuite
 }
   }
 
+  test("SPARK-20680: Spark-sql do not support for unknown column datatype") {
+withTable("t") {
+  withView("tabUnknownType") {
+hiveClient.runSqlHive("CREATE TABLE t (t1 int)")
+hiveClient.runSqlHive("INSERT INTO t VALUES (3)")
+hiveClient.runSqlHive("CREATE VIEW tabUnknownType AS SELECT NULL AS col FROM t")
+checkAnswer(spark.table("tabUnknownType"), Row(null))
+// No exception shows
+val desc = spark.sql("DESC tabUnknownType").collect().toSeq
+assert(desc.contains(Row("col", NullType.simpleString, null)))
+  }
+}
+
+// Forbid CTAS with unknown type
+withTable("t1", "t2", "t3") {
+  val e1 = intercept[AnalysisException] {
+spark.sql("CREATE TABLE t1 USING PARQUET AS SELECT null as null_col")
+  }.getMessage
+  assert(e1.contains("Cannot create tables with unknown type"))
+
+  val e2 = intercept[AnalysisException] {
+spark.sql("CREATE TABLE t2 AS SELECT null as null_col")

Review comment:
   can we use `STORED AS` to create a Hive table explicitly?








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-07-06 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r450158300



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Util.scala
##
@@ -346,4 +346,23 @@ private[sql] object CatalogV2Util {
   }
 }
   }
+
+  def failNullType(dt: DataType): Unit = {
+def containsNullType(dt: DataType): Boolean = dt match {
+  case ArrayType(et, _) => containsNullType(et)
+  case MapType(kt, vt, _) => containsNullType(kt) || containsNullType(vt)
+  case StructType(fields) => fields.exists(f => containsNullType(f.dataType))
+  case _ => dt.isInstanceOf[NullType]
+}
+if (containsNullType(dt)) {
+  throw new AnalysisException(
+"Cannot create tables with VOID type.")

Review comment:
   Let's go with `unknown` then
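   The recursion quoted in the diff above can be modeled standalone. The ADT 
below is a simplified, hypothetical stand-in for Spark's 
`org.apache.spark.sql.types` classes, just to show how the check descends into 
nested types:

```scala
// Simplified stand-in types (not Spark's real classes) illustrating the
// recursive null-type detection in CatalogV2Util.failNullType.
sealed trait DataType
case object IntType extends DataType
case object NullType extends DataType
final case class ArrayType(elementType: DataType) extends DataType
final case class MapType(keyType: DataType, valueType: DataType) extends DataType
final case class StructType(fields: Seq[DataType]) extends DataType

def containsNullType(dt: DataType): Boolean = dt match {
  case ArrayType(et)   => containsNullType(et)            // array<null> is caught too
  case MapType(kt, vt) => containsNullType(kt) || containsNullType(vt)
  case StructType(fs)  => fs.exists(containsNullType)
  case _               => dt == NullType
}

def failNullType(dt: DataType): Unit =
  if (containsNullType(dt)) {
    throw new IllegalArgumentException("Cannot create tables with unknown type.")
  }
```

   The point of the recursion is that a null type nested anywhere, e.g. 
`map<int, null>` inside a struct, is rejected, not just a top-level null 
column.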








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-07-02 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448836271



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Util.scala
##
@@ -346,4 +346,23 @@ private[sql] object CatalogV2Util {
   }
 }
   }
+
+  def failNullType(dt: DataType): Unit = {
+def containsNullType(dt: DataType): Boolean = dt match {
+  case ArrayType(et, _) => containsNullType(et)
+  case MapType(kt, vt, _) => containsNullType(kt) || containsNullType(vt)
+  case StructType(fields) => fields.exists(f => containsNullType(f.dataType))
+  case _ => dt.isInstanceOf[NullType]
+}
+if (containsNullType(dt)) {
+  throw new AnalysisException(
+"Cannot create tables with VOID type.")

Review comment:
   I'm in favor of UNKNOWN/unknown, as it indicates it's not a real data 
type. But I'm open to other options. cc @gatorsmile @HyukjinKwon @maropu 
@viirya @bart-samwel 








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-07-01 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448201936



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Util.scala
##
@@ -346,4 +346,23 @@ private[sql] object CatalogV2Util {
   }
 }
   }
+
+  def failNullType(dt: DataType): Unit = {
+def containsNullType(dt: DataType): Boolean = dt match {
+  case ArrayType(et, _) => containsNullType(et)
+  case MapType(kt, vt, _) => containsNullType(kt) || containsNullType(vt)
+  case StructType(fields) => fields.exists(f => containsNullType(f.dataType))
+  case _ => dt.isInstanceOf[NullType]
+}
+if (containsNullType(dt)) {
+  throw new AnalysisException(
+"Cannot create tables with VOID type.")

Review comment:
   AFAIK some databases use UNKNOWN to represent the null type. Maybe 
UNKNOWN is better as it's not a data type so we don't need to document it.








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-07-01 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448154450



##
File path: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##
@@ -2309,6 +2309,108 @@ class HiveDDLSuite
 }
   }
 
+  test("SPARK-20680: Spark-sql do not support for void column datatype") {
+withTable("t") {
+  withView("tabVoidType") {
+val client =
+  spark.sharedState.externalCatalog.unwrapped.asInstanceOf[HiveExternalCatalog].client
+client.runSqlHive("CREATE TABLE t (t1 int)")
+client.runSqlHive("INSERT INTO t VALUES (3)")
+client.runSqlHive("CREATE VIEW tabVoidType AS SELECT NULL AS col FROM t")
+checkAnswer(spark.table("tabVoidType"), Row(null))
+// No exception shows
+val desc = spark.sql("DESC tabVoidType").collect().toSeq
+assert(desc.contains(Row("col", "null", null)))

Review comment:
   I mean `DataType.simpleString`.
   
   I think it looks better if DESC TABLE returns `Row("col", "void", null)` 
here.








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448115666



##
File path: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##
@@ -2309,6 +2309,108 @@ class HiveDDLSuite
 }
   }
 
+  test("SPARK-20680: Spark-sql do not support for void column datatype") {
+withTable("t") {
+  withView("tabVoidType") {
+val client =
+  spark.sharedState.externalCatalog.unwrapped.asInstanceOf[HiveExternalCatalog].client
+client.runSqlHive("CREATE TABLE t (t1 int)")
+client.runSqlHive("INSERT INTO t VALUES (3)")
+client.runSqlHive("CREATE VIEW tabVoidType AS SELECT NULL AS col FROM t")
+checkAnswer(spark.table("tabVoidType"), Row(null))
+// No exception shows
+val desc = spark.sql("DESC tabVoidType").collect().toSeq
+assert(desc.contains(Row("col", "null", null)))
+  }
+}
+
+// Forbid CTAS with null type
+withTable("t1", "t2", "t3") {
+  val e1 = intercept[AnalysisException] {
+spark.sql("CREATE TABLE t1 USING PARQUET AS SELECT null as null_col")
+  }.getMessage
+  assert(e1.contains("Cannot create tables with VOID type"))
+
+  val e2 = intercept[AnalysisException] {
+spark.sql("CREATE TABLE t2 AS SELECT null as null_col")
+  }.getMessage
+  assert(e2.contains("Cannot create tables with VOID type"))
+
+  val e3 = intercept[AnalysisException] {
+spark.sql("CREATE TABLE t3 STORED AS PARQUET AS SELECT null as null_col")
+  }.getMessage
+  assert(e3.contains("Cannot create tables with VOID type"))
+}
+
+// Forbid creating table with void/null type in Spark
+Seq("void", "null").foreach { colType =>
+  withTable("t1", "t2", "t3") {
+val e1 = intercept[AnalysisException] {
+  spark.sql(s"CREATE TABLE t1 (v $colType) USING parquet")
+}.getMessage
+assert(e1.contains("Cannot create tables with VOID type"))
+val e2 = intercept[AnalysisException] {
+  spark.sql(s"CREATE TABLE t2 (v $colType) USING hive")

Review comment:
   can we follow the CTAS test and use `STORED AS PARQUET`?








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448111898



##
File path: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##
@@ -2309,6 +2309,108 @@ class HiveDDLSuite
 }
   }
 
+  test("SPARK-20680: Spark-sql do not support for void column datatype") {
+withTable("t") {
+  withView("tabVoidType") {
+val client =
+  spark.sharedState.externalCatalog.unwrapped.asInstanceOf[HiveExternalCatalog].client
+client.runSqlHive("CREATE TABLE t (t1 int)")
+client.runSqlHive("INSERT INTO t VALUES (3)")
+client.runSqlHive("CREATE VIEW tabVoidType AS SELECT NULL AS col FROM t")
+checkAnswer(spark.table("tabVoidType"), Row(null))
+// No exception shows
+val desc = spark.sql("DESC tabVoidType").collect().toSeq
+assert(desc.contains(Row("col", "null", null)))

Review comment:
   shall we change `NullType.toString` to use `void`, to match the parser 
side?








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448111771



##
File path: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##
@@ -2309,6 +2309,108 @@ class HiveDDLSuite
 }
   }
 
+  test("SPARK-20680: Spark-sql do not support for void column datatype") {
+withTable("t") {
+  withView("tabVoidType") {
+val client =
+  spark.sharedState.externalCatalog.unwrapped.asInstanceOf[HiveExternalCatalog].client
+client.runSqlHive("CREATE TABLE t (t1 int)")
+client.runSqlHive("INSERT INTO t VALUES (3)")
+client.runSqlHive("CREATE VIEW tabVoidType AS SELECT NULL AS col FROM t")

Review comment:
   shall we check TABLE as well instead of only VIEW?








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r44876



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
##
@@ -292,6 +293,8 @@ case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[Logi
   "in the table definition of " + table.identifier,
   sparkSession.sessionState.conf.caseSensitiveAnalysis)
 
+assertNoNullTypeInSchema(schema)

Review comment:
   Is this needed? I think the changes in `ResolveCatalogs` and 
`ResolveSessionCatalog` should cover all the commands.

##
File path: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala
##
@@ -106,7 +107,7 @@ class ResolveHiveSerdeTable(session: SparkSession) extends Rule[LogicalPlan] {
   } else {
 withStorage
   }
-
+  assertNoNullTypeInSchema(withSchema.schema)

Review comment:
   ditto








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448110946



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala
##
@@ -270,6 +275,7 @@ class ResolveSessionCatalog(
  SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _) =>
   val provider = c.provider.getOrElse(conf.defaultDataSourceName)
   if (!isV2Provider(provider)) {
+assertNoNullTypeInSchema(c.tableSchema)

Review comment:
   ditto, this check can be done at the beginning.








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448110841



##
File path: 
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala
##
@@ -102,6 +105,7 @@ class ResolveSessionCatalog(
  nameParts @ SessionCatalogAndTable(catalog, tbl), _, _, _, _, _) =>
   loadTable(catalog, tbl.asIdentifier).collect {
 case v1Table: V1Table =>
+  a.dataType.foreach(failNullType)

Review comment:
   this can be done before the `loadTable` call.








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448110622



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/types/NullType.scala
##
@@ -38,4 +38,12 @@ class NullType private() extends DataType {
  * @since 1.3.0
  */
 @Stable
-case object NullType extends NullType
+case object NullType extends NullType {
+
+  def containsNullType(dt: DataType): Boolean = dt match {

Review comment:
   let's not add a new method to a stable public class. Can we put it in 
the method `failNullType`?








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448110440



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Util.scala
##
@@ -346,4 +346,17 @@ private[sql] object CatalogV2Util {
   }
 }
   }
+
+  def failNullType(dt: DataType): Unit = {
+if (NullType.containsNullType(dt)) {
+  throw new AnalysisException(
+"Cannot create tables with VOID type.")
+}
+  }
+
+  def assertNoNullTypeInSchema(schema: StructType): Unit = {
+schema.foreach { f =>
+  failNullType(CatalystSqlParser.parseDataType(schema.catalogString))

Review comment:
   shouldn't this be `failNullType(f.dataType)`?
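   For illustration, the fix suggested here can be sketched standalone: check 
each field's own `dataType` rather than re-parsing the whole schema's catalog 
string once per field. The types below are hypothetical stand-ins, not Spark's 
real `StructType`/`StructField`:

```scala
// Hypothetical stand-in types illustrating the suggested fix:
// assertNoNullTypeInSchema should test f.dataType for each field.
sealed trait DataType
case object IntType extends DataType
case object NullType extends DataType
final case class StructField(name: String, dataType: DataType)
final case class StructType(fields: Seq[StructField])

def failNullType(dt: DataType): Unit =
  if (dt == NullType) {
    throw new IllegalArgumentException("Cannot create tables with unknown type.")
  }

def assertNoNullTypeInSchema(schema: StructType): Unit =
  schema.fields.foreach(f => failNullType(f.dataType)) // per-field, as the review suggests
</imports>
```

   Besides being wrong, the original also did redundant work: it parsed the 
entire schema string once for every field in it.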








[GitHub] [spark] cloud-fan commented on a change in pull request #28833: [SPARK-20680][SQL] Spark-sql do not support for creating table with void column datatype

2020-06-30 Thread GitBox


cloud-fan commented on a change in pull request #28833:
URL: https://github.com/apache/spark/pull/28833#discussion_r448110252



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
##
@@ -2211,6 +2211,8 @@ class AstBuilder(conf: SQLConf) extends 
SqlBaseBaseVisitor[AnyRef] with Logging
 DecimalType(precision.getText.toInt, 0)
   case ("decimal" | "dec" | "numeric", precision :: scale :: Nil) =>
 DecimalType(precision.getText.toInt, scale.getText.toInt)
+  case ("void", Nil) => NullType
+  case ("null", Nil) => NullType

Review comment:
   I'm not sure about this. `null` is also a literal syntax, and this may 
introduce ambiguity if `null` is also a type name.
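   
   A minimal sketch of the alternative this concern points at: accept only 
`void` as a type name, so `null` can stay purely a literal keyword in the 
grammar. The mini resolver below is hypothetical, not Spark's `AstBuilder`:

```scala
// Hypothetical mini type-name resolver: "void" maps to the null type, while
// "null" is deliberately left out to avoid ambiguity with the NULL literal.
sealed trait DataType
case object NullType extends DataType
case object IntegerType extends DataType

def resolvePrimitiveType(name: String): Either[String, DataType] =
  name.toLowerCase match {
    case "void"            => Right(NullType)
    case "int" | "integer" => Right(IntegerType)
    case other             => Left(s"DataType $other is not supported.")
  }
```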




