This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 99f33ec  [SPARK-32234][FOLLOWUP][SQL] Update the description of utility method
99f33ec is described below

commit 99f33ec30f04bb0f7b09c3c2abfc5d5b6af50599
Author: SaurabhChawla <s.saurabh...@gmail.com>
AuthorDate: Mon Jul 27 08:14:02 2020 +0000

    [SPARK-32234][FOLLOWUP][SQL] Update the description of utility method
    
    ### What changes were proposed in this pull request?
    PR https://github.com/apache/spark/pull/29045 added a helper method. This follow-up PR updates the description of that helper method.
    
    ### Why are the changes needed?
    For better readability and understanding of the code
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    Since this change only updates the description, I ran the Spark shell to verify it.
    
    Closes #29232 from SaurabhChawla100/SPARK-32234-Desc.
    
    Authored-by: SaurabhChawla <s.saurabh...@gmail.com>
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
---
 .../spark/sql/execution/datasources/orc/OrcUtils.scala     | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
index e102539..072e670 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
@@ -207,10 +207,16 @@ object OrcUtils extends Logging {
   }
 
   /**
-   * @return Returns the result schema string based on the canPruneCols flag.
-   *         resultSchemaString will be created using resultsSchema in case of
-   *         canPruneCols is true and for canPruneCols as false value
-   *         resultSchemaString will be created using the actual dataSchema.
+   * Returns the result schema to read from an ORC file. In addition, it sets
+   * the schema string to 'orc.mapred.input.schema' so the ORC reader can use it later.
+   *
+   * @param canPruneCols Flag that decides whether the pruned column schema
+   *                     (resultSchema) or the entire dataSchema is used as the result.
+   * @param dataSchema   Schema of the ORC files.
+   * @param resultSchema Result schema created after pruning columns.
+   * @param partitionSchema Schema of the partitions.
+   * @param conf Hadoop configuration.
+   * @return The result schema as a string.
    */
   def orcResultSchemaString(
       canPruneCols: Boolean,

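The contract described by the updated Scaladoc can be sketched as follows. This is an illustrative, self-contained stand-in, not Spark's actual implementation: the `StructType`/`StructField` classes and the `catalogString` format below merely mimic Spark's `struct<name:type,...>` schema strings, and the real method additionally writes the chosen string to the `orc.mapred.input.schema` Hadoop configuration key.

```scala
// Minimal stand-ins (NOT Spark's classes) whose catalogString mimics
// Spark's struct<name:type,...> schema-string format.
case class StructField(name: String, dataType: String)
case class StructType(fields: Seq[StructField]) {
  def catalogString: String =
    fields.map(f => s"${f.name}:${f.dataType}").mkString("struct<", ",", ">")
}

// Mirrors the documented contract: when canPruneCols is true, the pruned
// resultSchema is serialized; otherwise the full dataSchema is used.
// (In Spark, the result is also set on the Hadoop conf for the ORC reader.)
def orcResultSchemaString(
    canPruneCols: Boolean,
    dataSchema: StructType,
    resultSchema: StructType): String =
  if (canPruneCols) resultSchema.catalogString else dataSchema.catalogString

val dataSchema   = StructType(Seq(StructField("id", "bigint"), StructField("name", "string")))
val resultSchema = StructType(Seq(StructField("id", "bigint")))

println(orcResultSchemaString(canPruneCols = true, dataSchema, resultSchema))
println(orcResultSchemaString(canPruneCols = false, dataSchema, resultSchema))
```

With pruning enabled only `struct<id:bigint>` is emitted; with pruning disabled the full `struct<id:bigint,name:string>` file schema is emitted, which matches the branching the doc comment describes.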

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
