Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/19943
  
    A high-level question: @viirya had a PR that did this by creating a wrapper 
for the ORC columnar batch. The Parquet data source takes a different approach: 
it copies the values into Spark's own columnar batch.
    
    Generally I think the wrapper approach should be faster for a pure scan, but 
may be slower if there is computation after the scan, e.g. an aggregate. Do we 
have a benchmark for this?
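
    To make the trade-off concrete, here is a minimal sketch of the two approaches. All class names here (`OrcLongVector`, `OrcWrapper`, `OnHeapCopy`) are hypothetical stand-ins, not the actual Spark or ORC classes: the wrapper delegates every read to the ORC buffer (zero copy at scan time, one extra indirection per read), while the copy approach pays a one-time copy into Spark-managed memory so that later reads, e.g. inside an aggregate, are direct.

    ```java
    // Hypothetical, simplified stand-ins for the real ORC/Spark classes,
    // just to illustrate wrapper-vs-copy; not Spark's actual API.
    public class WrapperVsCopy {
        // stand-in for ORC's long column vector
        static final class OrcLongVector {
            final long[] data;
            OrcLongVector(long[] d) { data = d; }
        }

        // stand-in for Spark's columnar read interface
        interface SparkLongColumn { long getLong(int row); }

        // Wrapper approach: zero-copy at scan time; every read
        // delegates to the underlying ORC buffer.
        static final class OrcWrapper implements SparkLongColumn {
            private final OrcLongVector orc;
            OrcWrapper(OrcLongVector o) { orc = o; }
            public long getLong(int row) { return orc.data[row]; }
        }

        // Copy approach: one up-front copy into Spark-managed memory;
        // later reads avoid the extra indirection.
        static final class OnHeapCopy implements SparkLongColumn {
            private final long[] copied;
            OnHeapCopy(OrcLongVector o) { copied = o.data.clone(); }
            public long getLong(int row) { return copied[row]; }
        }

        // A toy "computation after the scan" (aggregate): both
        // approaches must produce the same result.
        static long sum(SparkLongColumn col, int n) {
            long s = 0;
            for (int i = 0; i < n; i++) s += col.getLong(i);
            return s;
        }

        public static void main(String[] args) {
            OrcLongVector orc = new OrcLongVector(new long[]{1, 2, 3, 4});
            System.out.println(sum(new OrcWrapper(orc), 4));  // 10
            System.out.println(sum(new OnHeapCopy(orc), 4));  // 10
        }
    }
    ```

    The question above is which cost dominates: the wrapper skips the copy, so a pure scan should win, but the per-read indirection is paid again by every operator downstream, which is why a benchmark with an aggregate on top of the scan would be informative.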

