[GitHub] spark pull request #21757: [SPARK-24797] [SQL] respect spark.sql.hive.conver...

2018-07-13 Thread CodingCat
Github user CodingCat commented on a diff in the pull request:

https://github.com/apache/spark/pull/21757#discussion_r202417166
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]

   override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
     case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-        if DDLUtils.isDatasourceTable(tableMeta) =>
+        if DDLUtils.isDatasourceTable(tableMeta) &&
+          DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff --

ok


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #21757: [SPARK-24797] [SQL] respect spark.sql.hive.conver...

2018-07-13 Thread CodingCat
Github user CodingCat closed the pull request at:

https://github.com/apache/spark/pull/21757


---




[GitHub] spark pull request #21757: [SPARK-24797] [SQL] respect spark.sql.hive.conver...

2018-07-13 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/21757#discussion_r202416374
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]

   override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
     case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-        if DDLUtils.isDatasourceTable(tableMeta) =>
+        if DDLUtils.isDatasourceTable(tableMeta) &&
+          DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff --

If you are using `format("parquet")` to create a new table, it will be a 
data source table. We always use the native reader/writer to read/write such a 
table.
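
The distinction described here hinges on the table's `provider` field in the catalog metadata. Below is a minimal, self-contained sketch of how `DDLUtils.isDatasourceTable` makes that call; the `CatalogTable` stand-in is simplified for illustration (the real class lives in `org.apache.spark.sql.catalyst.catalog` and carries far more fields):

```scala
// Simplified stand-in for Spark's CatalogTable: only the provider field matters here.
case class CatalogTable(identifier: String, provider: Option[String])

object DDLUtilsSketch {
  val HiveProvider = "hive"

  // A table is a "data source table" when it has a provider and that provider
  // is not Hive. Tables created via df.write.format("parquet").saveAsTable(...)
  // record provider = "parquet", so they take the native reader/writer path.
  def isDatasourceTable(table: CatalogTable): Boolean =
    table.provider.exists(_.toLowerCase != HiveProvider)
}
```

Under this model, a table created through the DataFrame writer with `format("parquet")` is served by Spark's native parquet code path regardless of any Hive conversion setting.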


---




[GitHub] spark pull request #21757: [SPARK-24797] [SQL] respect spark.sql.hive.conver...

2018-07-13 Thread CodingCat
Github user CodingCat commented on a diff in the pull request:

https://github.com/apache/spark/pull/21757#discussion_r202414440
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]

   override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
     case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-        if DDLUtils.isDatasourceTable(tableMeta) =>
+        if DDLUtils.isDatasourceTable(tableMeta) &&
+          DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff --

do you mean any table built through `df.write.format("..")` should be treated 
as a data source table, no matter whether we register it with HMS or not?


---




[GitHub] spark pull request #21757: [SPARK-24797] [SQL] respect spark.sql.hive.conver...

2018-07-12 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/21757#discussion_r202248225
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]

   override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
     case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-        if DDLUtils.isDatasourceTable(tableMeta) =>
+        if DDLUtils.isDatasourceTable(tableMeta) &&
+          DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff --

I do not think this is the right fix. If the original table is a native 
data source table, we will always use our parquet/orc reader instead of the 
Hive serde.
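
The objection can be sketched as a decision rule: `spark.sql.hive.convertMetastoreParquet` should only govern Hive serde tables, while a native data source table never consults it. A hedged illustration of that rule follows; the `TableMeta` type and the `usesNativeReader` function are invented for this sketch and are not Spark's actual API:

```scala
// Illustrative only: which reader a table gets under the reviewer's model.
case class TableMeta(provider: Option[String], hiveSerdeFormat: Option[String])

def usesNativeReader(table: TableMeta, convertMetastoreParquet: Boolean): Boolean =
  table.provider match {
    // Native data source table: always the built-in parquet/orc reader,
    // independent of any spark.sql.hive.convertMetastore* setting.
    case Some(p) if p.toLowerCase != "hive" => true
    // Hive serde table: the conversion flag decides whether the native
    // reader is swapped in for parquet-backed tables.
    case _ => convertMetastoreParquet && table.hiveSerdeFormat.contains("parquet")
  }
```

Note how the flag only ever changes the outcome in the second branch, which is why gating `FindDataSourceTable` on it would wrongly affect native tables as well.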


---
