Github user tejasapatil commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13775#discussion_r83764569
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala ---
    @@ -131,31 +138,43 @@ class OrcFileFormat extends FileFormat with DataSourceRegister with Serializable
             val physicalSchema = maybePhysicalSchema.get
            OrcRelation.setRequiredColumns(conf, physicalSchema, requiredSchema)
     
    -        val orcRecordReader = {
    -          val job = Job.getInstance(conf)
    -          FileInputFormat.setInputPaths(job, file.filePath)
    -
    -          val fileSplit = new FileSplit(
    -            new Path(new URI(file.filePath)), file.start, file.length, Array.empty
    -          )
    -          // Custom OrcRecordReader is used to get
    -          // ObjectInspector during recordReader creation itself and can
    -          // avoid NameNode call in unwrapOrcStructs per file.
    -          // Specifically would be helpful for partitioned datasets.
    -          val orcReader = OrcFile.createReader(
    -            new Path(new URI(file.filePath)), OrcFile.readerOptions(conf))
    -          new SparkOrcNewRecordReader(orcReader, conf, fileSplit.getStart, fileSplit.getLength)
    +        val job = Job.getInstance(conf)
    +        FileInputFormat.setInputPaths(job, file.filePath)
    +
    +        val fileSplit = new FileSplit(
    +          new Path(new URI(file.filePath)), file.start, file.length, Array.empty
    +        )
    +        // Custom OrcRecordReader is used to get
    +        // ObjectInspector during recordReader creation itself and can
    +        // avoid NameNode call in unwrapOrcStructs per file.
    +        // Specifically would be helpful for partitioned datasets.
    +        val orcReader = OrcFile.createReader(
    +          new Path(new URI(file.filePath)), OrcFile.readerOptions(conf))
    +
    +        if (enableVectorizedReader) {
    +          val conf = job.getConfiguration.asInstanceOf[JobConf]
    --- End diff --
    
    why can't you reuse the `conf` at line 129 (https://github.com/apache/spark/pull/13775/files#diff-01999ccbf13e95a0ea2d223f69d8ae23R129)?

