[
https://issues.apache.org/jira/browse/ARROW-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298920#comment-17298920
]
Joris Van den Bossche commented on ARROW-7224:
----------------------------------------------
bq. FWIW, Spark has APIs for push-down predicates that allow a source to
tell it which predicates it can push down effectively and which need to be
handled by the engine (i.e. using compute kernels).
[[email protected]] do you have a (doc) reference for this?
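To illustrate the general idea (a rough sketch with hypothetical names, not an
existing Arrow or Spark API): the source keeps the predicates it can evaluate
cheaply, e.g. on partition columns, and hands the residual ones back to the
engine to apply with compute kernels.
{code:python}
from dataclasses import dataclass
from typing import List


@dataclass
class Predicate:
    column: str
    op: str
    value: object


@dataclass
class PushdownResult:
    accepted: List[Predicate]   # predicates the source will apply itself
    residual: List[Predicate]   # predicates the engine still has to evaluate


def push_filters(partition_keys, predicates):
    # Hypothetical splitting rule: only predicates on partition columns
    # can be evaluated by the source during discovery; everything else
    # stays with the engine.
    accepted = [p for p in predicates if p.column in partition_keys]
    residual = [p for p in predicates if p.column not in partition_keys]
    return PushdownResult(accepted, residual)


# Only the partition-key predicate gets pushed down.
result = push_filters(
    {"year", "month"},
    [Predicate("year", "==", 2019), Predicate("value", ">", 10.0)],
)
{code}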
As [~bkietz] mentioned above, a key part of the issue is "filtering during
construction" vs "filtering during query". Currently you can only provide a
filter when actually querying. But do we want to consider adding a
{{filter}} argument at construction time as well, for the case where you know
that all your subsequent queries will use that filter?
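To make that concrete with the current Python bindings (the {{filter}} keyword
in the {{dataset(...)}} call below is hypothetical, it does not exist today):
{code:python}
import pyarrow.dataset as ds

# Today: the filter is only supplied at query time, after the dataset
# (and its file listing) has already been constructed.
dataset = ds.dataset("s3://bucket/table/", partitioning="hive")
table = dataset.to_table(filter=ds.field("year") == 2019)

# Hypothetical: a filter known up front could be passed at construction
# time, so discovery could skip listing non-matching partition
# directories entirely.
# dataset = ds.dataset("s3://bucket/table/", partitioning="hive",
#                      filter=ds.field("year") == 2019)
{code}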
> [C++][Dataset] Partition level filters should be able to provide filtering to
> file systems
> ------------------------------------------------------------------------------------------
>
> Key: ARROW-7224
> URL: https://issues.apache.org/jira/browse/ARROW-7224
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++
> Reporter: Micah Kornfield
> Priority: Major
> Labels: dataset
>
> When providing a filter for partitions, it should be possible in some cases
> to use it to optimize file system list calls. This can greatly improve the
> speed for reading data from partitions because fewer directories/files need
> to be explored/expanded. I've fallen behind on the dataset code, but I want
> to make sure this issue is tracked someplace. This came up in the SO
> question linked below (feel free to correct my analysis if I missed the
> functionality someplace).
> Reference:
> [https://stackoverflow.com/questions/58868584/pyarrow-parquetdataset-read-is-slow-on-a-hive-partitioned-s3-dataset-despite-u/58951477#58951477]
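Regarding the optimization described in the quoted issue: from the filesystem
side it amounts to only expanding the partition directories that can satisfy
the filter, rather than recursively listing the whole table. A sketch assuming
hive-style {{key=value}} directory names and the {{pyarrow.fs}} S3 filesystem
(paths are illustrative):
{code:python}
import pyarrow.fs as fs

s3 = fs.S3FileSystem(region="us-east-1")

# For a filter like year == 2019, only this subtree needs to be listed
# instead of the whole bucket/table/ prefix.
selector = fs.FileSelector("bucket/table/year=2019", recursive=True)
matching_files = [
    info.path
    for info in s3.get_file_info(selector)
    if info.type == fs.FileType.File
]
{code}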
--
This message was sent by Atlassian Jira
(v8.3.4#803005)