Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19602#discussion_r192865145
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala ---
@@ -46,10 +48,11 @@ class HiveClientSuite(version: String)
val hadoopConf = new Configuration()
hadoopConf.setBoolean(tryDirectSqlKey, tryDirectSql)
val client = buildClient(hadoopConf)
-    client
-      .runSqlHive("CREATE TABLE test (value INT) PARTITIONED BY (ds INT, h INT, chunk STRING)")
+    client.runSqlHive("CREATE TABLE test0 (value INT) PARTITIONED BY (ds INT, h INT, chunk STRING)")
--- End diff ---
Can we reuse the existing data set for this test, e.g. with `'ds.cast(LongType) === 8L`? Then we can reduce the code diff here.
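For illustration, a minimal sketch of the kind of predicate being suggested, built with the Catalyst expression DSL. The column name `ds` comes from the diff above; the surrounding object name and the literal `8L` are placeholders, not taken from the PR:

```scala
import org.apache.spark.sql.catalyst.dsl.expressions._
import org.apache.spark.sql.types.LongType

object DsFilterSketch {
  // Sketch only: filter the existing `ds INT` partition column by casting it
  // to long, instead of creating a second table with a different schema.
  // Catalyst's DSL uses `===` to build an EqualTo predicate expression.
  val dsFilter = 'ds.cast(LongType) === 8L
}
```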
---