[
https://issues.apache.org/jira/browse/ARROW-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16085749#comment-16085749
]
Wes McKinney commented on ARROW-1213:
-------------------------------------
Yes, of course -- ARROW-1119 already tracks the Python glue for reading
partition listings from S3, and at some point we will implement a more
optimized IO interface to S3 blobs (ARROW-453), but we can use the existing
s3fs implementation for now. Do you have any time to work on this? I believe
s3fs's S3FileSystem needs to be wrapped in a shim class that exposes the
common filesystem API:
https://github.com/apache/arrow/blob/master/python/pyarrow/filesystem.py
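To make the idea concrete, here is a minimal sketch of what such a shim might
look like. The class name, the exact method set, and the ParquetDataset usage
below are illustrative assumptions layered on top of s3fs, not the actual
pyarrow implementation, and the bucket/key names are placeholders:

import s3fs
import pyarrow.parquet as pq


class S3FSWrapper(object):
    # Hypothetical shim: adapts s3fs.S3FileSystem to a minimal
    # isdir/isfile/ls/open interface of the kind the common
    # filesystem API in filesystem.py expects.
    def __init__(self, fs=None, **kwargs):
        self.fs = fs if fs is not None else s3fs.S3FileSystem(**kwargs)

    def isdir(self, path):
        # S3 has no real directories; treat a key prefix as a "directory"
        # when listing it returns something other than the key itself.
        contents = self.fs.ls(path)
        return len(contents) > 0 and contents != [path.rstrip('/')]

    def isfile(self, path):
        return self.fs.exists(path) and not self.isdir(path)

    def ls(self, path):
        return self.fs.ls(path)

    def open(self, path, mode='rb'):
        return self.fs.open(path, mode=mode)


# Example usage against a partitioned dataset layout under a bucket prefix
# (placeholder names; credentials are picked up from the environment):
fs = S3FSWrapper(anon=False)
dataset = pq.ParquetDataset('my-bucket/dataset-root', filesystem=fs)
table = dataset.read()

Whether the wrapper needs additional methods depends on what the dataset code
actually calls, so the above should be treated as a starting point rather than
a complete adapter.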
Please report issues or feature requests here rather than on StackOverflow
(I stopped watching SO many years ago). If you have questions about the
project or something you would like to discuss, please use
[email protected]
> can we add support for pyarrow to read from s3 based on partitions
> ------------------------------------------------------------------
>
> Key: ARROW-1213
> URL: https://issues.apache.org/jira/browse/ARROW-1213
> Project: Apache Arrow
> Issue Type: Improvement
> Reporter: Yacko
> Priority: Minor
>
> The pyarrow dataset functionality can't read from S3 using s3fs as the
> filesystem. Is there a way we can add support for reading partitioned
> files from S3?
> I am trying to address the problem described in this StackOverflow question:
> https://stackoverflow.com/questions/45082832/how-to-read-partitioned-parquet-files-from-s3-using-pyarrow-in-python