[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-23 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r250441717
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -288,16 +285,26 @@ class HadoopTableReader(
 }
   }
 
+  /**
+   * The entry of creating a RDD.
 
 Review comment:
   Can you describe more about the motivation of this pr? I think this comment is a little meaningless





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-21 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249419838
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -311,6 +318,30 @@ class HadoopTableReader(
     // Only take the value (skip the key) because Hive works only with values.
     rdd.map(_._2)
   }
 +
 +  /**
 +   * Creates a NewHadoopRDD based on the broadcasted HiveConf and other job properties that will be
 +   * applied locally on each slave.
 +   */
 +  private def createNewHadoopRdd(tableDesc: TableDesc, path: String): RDD[Writable] = {
+
 
 Review comment:
   super nit: Remove this blank line





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249316353
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveTableScanSuite.scala
 ##
 @@ -192,4 +192,44 @@ class HiveTableScanSuite extends HiveComparisonTest with SQLTestUtils with TestHiveSingleton
       case p: HiveTableScanExec => p
     }.get
   }
+
+  test("[SPARK-26630] Fix ClassCastException in TableReader while creating 
HadoopRDD") {
+withTable("table_old", "table_pt_old", "table_new", "table_pt_new") {
+  sql(
+s"""
+   |CREATE TABLE table_old (id int)
+   |STORED AS
+   |INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
+   |OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
+ """.stripMargin)
+  sql(
+s"""
+   |CREATE TABLE table_pt_old (id int)
+   |PARTITIONED BY (a int, b int)
+   |STORED AS
+   |INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
+   |OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
+ """.stripMargin)
+  sql(
+s"""
+   |CREATE TABLE table_new (id int)
+   |STORED AS
+   |INPUTFORMAT 'org.apache.hadoop.mapreduce.lib.input.TextInputFormat'
+   |OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
+   """.stripMargin)
+  sql(
+s"""
+   |CREATE TABLE table_pt_new (id int)
+   |PARTITIONED BY (a int, b int)
+   |STORED AS
+   |INPUTFORMAT 'org.apache.hadoop.mapreduce.lib.input.TextInputFormat'
+   |OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
+   """.stripMargin)
+
+  sql("SELECT count(1) FROM table_old").show()
+  sql("SELECT count(1) FROM table_pt_old").show()
+  sql("SELECT count(1) FROM table_new").show()
+  sql("SELECT count(1) FROM table_pt_new").show()
+}
 
 Review comment:
   plz don't use `.show()` for tests and plz assert something...
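   For illustration, a minimal sketch of assertion-based checks, assuming the four tables are freshly created (hence empty) and that a `checkAnswer` helper like the one in Spark's `QueryTest` is in scope:
   ```
   // Hypothetical rewrite of the .show() calls as assertions; each table was
   // just created, so the expected count is 0 (count(1) returns a Long).
   import org.apache.spark.sql.Row

   Seq("table_old", "table_pt_old", "table_new", "table_pt_new").foreach { t =>
     checkAnswer(sql(s"SELECT count(1) FROM $t"), Row(0L))
   }
   ```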





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249316225
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -288,16 +286,29 @@ class HadoopTableReader(
 }
   }
 
+  /**
+   * The entry of creating a RDD.
+   */
+  private def createHadoopRDD(
+      inputClassName: String, localTableDesc: TableDesc, inputPathStr: String): RDD[Writable] = {
+    if (classOf[org.apache.hadoop.mapreduce.InputFormat[_, _]]
+      .isAssignableFrom(Utils.classForName(inputClassName))) {
+      createNewHadoopRdd(localTableDesc, inputPathStr, inputClassName)
+    } else {
+      createOldHadoopRdd(localTableDesc, inputPathStr, inputClassName)
+    }
+  }
+
   /**
    * Creates a HadoopRDD based on the broadcasted HiveConf and other job properties that will be
    * applied locally on each slave.
    */
-  private def createHadoopRdd(
-      tableDesc: TableDesc,
-      path: String,
-      inputFormatClass: Class[InputFormat[Writable, Writable]]): RDD[Writable] = {
+  private def createOldHadoopRdd(
+      tableDesc: TableDesc, path: String, inputClassName: String): RDD[Writable] = {
 
     val initializeJobConfFunc = HadoopTableReader.initializeLocalJobConfFunc(path, tableDesc) _
+    val inputFormatClass = Utils.classForName(inputClassName)
+      .asInstanceOf[java.lang.Class[org.apache.hadoop.mapred.InputFormat[Writable, Writable]]]
 
 Review comment:
   
   `partDesc.getInputFileFormatClass.asInstanceOf[org.apache.hadoop.mapred.InputFormat[Writable, Writable]]`?
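   Taken literally, the snippet above casts a `Class` object to an `InputFormat` instance; presumably the intended cast targets the class itself. A minimal sketch of that reading (an assumption, not the reviewer's exact code; `partDesc` as named in the suggestion):
   ```
   // Hypothetical reading of the suggestion: reuse the Class returned by
   // getInputFileFormatClass instead of re-resolving it via Utils.classForName.
   val inputFormatClass = partDesc.getInputFileFormatClass
     .asInstanceOf[Class[org.apache.hadoop.mapred.InputFormat[Writable, Writable]]]
   ```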





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249316244
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -311,6 +322,31 @@ class HadoopTableReader(
     // Only take the value (skip the key) because Hive works only with values.
     rdd.map(_._2)
   }
+
+  /**
+   * Creates a NewHadoopRDD based on the broadcasted HiveConf and other job properties that will be
+   * applied locally on each slave.
+   */
+  private def createNewHadoopRdd(
+      tableDesc: TableDesc, path: String, inputClassName: String): RDD[Writable] = {
+
+    val newJobConf = new JobConf(hadoopConf)
+    HadoopTableReader.initializeLocalJobConfFunc(path, tableDesc)(newJobConf)
+    val inputFormatClass = Utils.classForName(inputClassName)
+      .asInstanceOf[java.lang.Class[org.apache.hadoop.mapreduce.InputFormat[Writable, Writable]]]
 
 Review comment:
   ditto





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249315921
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -311,6 +322,31 @@ class HadoopTableReader(
     // Only take the value (skip the key) because Hive works only with values.
     rdd.map(_._2)
   }
+
+  /**
+   * Creates a NewHadoopRDD based on the broadcasted HiveConf and other job properties that will be
+   * applied locally on each slave.
+   */
+  private def createNewHadoopRdd(
+      tableDesc: TableDesc, path: String, inputClassName: String): RDD[Writable] = {
 
 Review comment:
   nit: 
   ```
  private def createNewHadoopRdd(
      tableDesc: TableDesc,
      path: String,
      inputClassName: String): RDD[Writable] = {
   ```





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249315479
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -289,15 +287,28 @@ class HadoopTableReader(
   }
 
   /**
-   * Creates a HadoopRDD based on the broadcasted HiveConf and other job properties that will be
+   * The entry of creating a RDD.
+   */
+  private def createHadoopRDD(
+      inputClassName: String, localTableDesc: TableDesc, inputPathStr: String): RDD[Writable] = {
+    if (classOf[org.apache.hadoop.mapreduce.InputFormat[_, _]]
 
 Review comment:
   Probably, @srowen meant you'd be better off just writing...;
   ```
   // Create local references so that the outer object isn't serialized.
   val localTableDesc = tableDesc
   partDesc.getInputFileFormatClass match {
     case c: Class[_] if c == classOf[org.apache.hadoop.mapreduce.InputFormat[_, _]] =>
       createNewHadoopRdd(...)
     case _ =>
       createOldHadoopRdd(...)
   }
   ...
   ```
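   For context, a note on the difference between the two checks: `c == classOf[...]` matches only the exact class, while the `isAssignableFrom` test used in the PR also accepts subclasses. A minimal illustrative sketch, assuming Hadoop's new-API `TextInputFormat` is on the classpath:
   ```
   // Illustrative only: TextInputFormat is a subclass of the new-API InputFormat,
   // so the equality test fails while the assignability test succeeds.
   val c = classOf[org.apache.hadoop.mapreduce.lib.input.TextInputFormat]
   assert(c != classOf[org.apache.hadoop.mapreduce.InputFormat[_, _]])
   assert(classOf[org.apache.hadoop.mapreduce.InputFormat[_, _]].isAssignableFrom(c))
   ```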





[GitHub] maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD

2019-01-20 Thread GitBox
maropu commented on a change in pull request #23559: [SPARK-26630][SQL] Fix ClassCastException in TableReader while creating HadoopRDD
URL: https://github.com/apache/spark/pull/23559#discussion_r249312842
 
 

 ##
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
 ##
 @@ -31,12 +31,12 @@ import org.apache.hadoop.hive.serde2.Deserializer
 import org.apache.hadoop.hive.serde2.objectinspector.{ObjectInspectorConverters, StructObjectInspector}
 import org.apache.hadoop.hive.serde2.objectinspector.primitive._
 import org.apache.hadoop.io.Writable
-import org.apache.hadoop.mapred.{FileInputFormat, InputFormat, JobConf}
+import org.apache.hadoop.mapred.{FileInputFormat, JobConf}
 
 import org.apache.spark.broadcast.Broadcast
 import org.apache.spark.deploy.SparkHadoopUtil
 import org.apache.spark.internal.Logging
-import org.apache.spark.rdd.{EmptyRDD, HadoopRDD, RDD, UnionRDD}
+import org.apache.spark.rdd._
 
 Review comment:
   nit: plz unfold this import.
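   For reference, a minimal sketch of the unfolded form, assuming the wildcard was introduced to pull in `NewHadoopRDD` alongside the previously imported names:
   ```
   import org.apache.spark.rdd.{EmptyRDD, HadoopRDD, NewHadoopRDD, RDD, UnionRDD}
   ```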

