Dandandan commented on issue #725:
URL: 
https://github.com/apache/arrow-datafusion/issues/725#issuecomment-881024855


   > > I added some form of limit push down to parquet some time ago.
   > > Might be that it isn't applied to your dataset somehow? Or maybe getting 
the metadata / statistics itself might be slow?
   > > [apache/arrow#9672](https://github.com/apache/arrow/pull/9672)
   > 
   > I tried generating another file using 
https://gist.github.com/Jimexist/82717bc3ef32a366e11ef60e6e876fcc and it turns 
out that limit indeed works. That's 6,405,008 rows, and `select * from table limit 
10` returns within 0.5s; selecting only one column returns in less than 
0.03s, so I guess it's indeed taking effect.
   > 
   > I guess my original slow case was because the parquet file was 
pulled directly from HDFS, in which case the statistics are not working?
   
   Hm, that's weird. We don't use the statistics for the limit; we reduce the 
amount scanned per file.
   It would be great to have a reproduction of this. Maybe it has to do with 
large row groups? 
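
   To illustrate the mechanism being described (this is a conceptual sketch, not 
the actual DataFusion or arrow-rs code): a limit push-down over a Parquet file 
can stop reading further row groups once enough rows have been collected. The 
`scan_with_limit` helper below and its list-of-lists model of row groups are 
hypothetical.

```python
def scan_with_limit(row_groups, limit):
    """Collect at most `limit` rows, stopping early across row groups.

    `row_groups` models a Parquet file as a list of row groups, each a
    list of rows. Row groups past the limit are never touched, which is
    the "reduce the amount scanned per file" behavior described above.
    """
    out = []
    for rg in row_groups:
        remaining = limit - len(out)
        if remaining <= 0:
            break  # limit satisfied; later row groups are skipped entirely
        out.extend(rg[:remaining])
    return out
```

   Note that a reader typically still has to decode at least one whole row 
group (or at least whole pages within it) before the limit can cut the scan 
short, so a file with very large row groups could still be slow for a small 
limit, consistent with the "large row groups" hypothesis above.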


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
