LuciferYang commented on code in PR #36616:
URL: https://github.com/apache/spark/pull/36616#discussion_r908054274


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/DataSourceReadBenchmark.scala:
##########
@@ -607,6 +607,38 @@ object DataSourceReadBenchmark extends SqlBasedBenchmark {
     }
   }
 
+  def vectorizedScanPartitionColumnsBenchmark(values: Int, pColumns: Int): Unit = {

Review Comment:
   Because there is no baseline, it is hard to quantify the end-to-end performance improvement.
   
   However, before this PR, Parquet and ORC used the same data structure for partition columns in vectorized reads. Accordingly, in this scenario Parquet and ORC had the same scan performance before this PR, while Parquet is slightly faster than ORC after it.
   
   See https://github.com/apache/spark/pull/36616#discussion_r882346084 and https://github.com/apache/spark/pull/36616#discussion_r882346982.
   
   I expect that after similar work on the ORC side, the two will have the same performance again.
   
   So I added this microbenchmark scenario; if you think it is unnecessary, I can delete it.
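   To illustrate, a minimal sketch of the scenario: write the same data as Parquet and ORC partitioned by `pColumns` partition columns, then scan only the partition columns with the vectorized readers. This follows the general conventions of `DataSourceReadBenchmark` (e.g. `prepareTable`, `withTempPath`, `withTempTable`, `Benchmark`), but the exact body here is an assumption, not the code added in this PR:
   
   ```scala
   // Hypothetical sketch, not the exact PR code: benchmark a vectorized scan
   // that reads only partition columns from Parquet vs ORC tables.
   def vectorizedScanPartitionColumnsBenchmark(values: Int, pColumns: Int): Unit = {
     withTempPath { dir =>
       withTempTable("t1", "parquetTable", "orcTable") {
         spark.range(values).createOrReplaceTempView("t1")
   
         // Derive p0, p1, ... as partition columns alongside a data column.
         val pNames = (0 until pColumns).map(i => s"p$i")
         val pExprs = pNames.map(p => s"id % 2 AS $p")
         prepareTable(
           dir,
           spark.sql(s"SELECT ${pExprs.mkString(", ")}, id FROM t1"),
           partition = Some(pNames.mkString(", ")))
   
         val benchmark = new Benchmark(
           s"Vectorized scan of $pColumns partition column(s)", values, output = output)
   
         // Select only partition columns so the scan cost is dominated by
         // the partition-column data structure under test.
         benchmark.addCase("Parquet Vectorized") { _ =>
           spark.sql(s"SELECT ${pNames.mkString(", ")} FROM parquetTable").noop()
         }
         benchmark.addCase("ORC Vectorized") { _ =>
           spark.sql(s"SELECT ${pNames.mkString(", ")} FROM orcTable").noop()
         }
         benchmark.run()
       }
     }
   }
   ```
   
   Selecting only `p0 .. pN` keeps the measurement focused on the partition-column representation, which is exactly where this PR changes Parquet's vectorized path.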



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

