[ https://issues.apache.org/jira/browse/ARROW-1213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209490#comment-16209490 ]

ASF GitHub Bot commented on ARROW-1213:
---------------------------------------

Github user martindurant commented on the issue:

    https://github.com/apache/arrow/pull/916
  
    Great to see arrow and s3fs working together, thanks for looking into it.
    Note that you can also give your credentials via files (typically in
    ~/.aws) or environment variables, if you don't want them to be stored
    within your code. Also, if you are on AWS hardware, then credentials
    should generally be available via the IAM service - see the s3fs docs.
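As a rough sketch of what the comment above describes (the bucket and dataset path here are placeholders; with no keys passed explicitly, s3fs falls back to boto's standard credential chain: environment variables, files under ~/.aws, or IAM instance metadata):

```python
import pyarrow.parquet as pq
import s3fs

# No access key / secret given here, so s3fs will look up credentials
# from the environment, ~/.aws files, or the IAM metadata service.
fs = s3fs.S3FileSystem(anon=False)

# "my-bucket/path/to/dataset" is a hypothetical partitioned dataset root.
dataset = pq.ParquetDataset("my-bucket/path/to/dataset", filesystem=fs)
table = dataset.read()
```

Passing the filesystem object in this way keeps credentials out of the code entirely, since resolution is deferred to boto's lookup order.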


> [Python] Enable s3fs to be used with ParquetDataset and reader/writer 
> functions
> -------------------------------------------------------------------------------
>
>                 Key: ARROW-1213
>                 URL: https://issues.apache.org/jira/browse/ARROW-1213
>             Project: Apache Arrow
>          Issue Type: Improvement
>            Reporter: Yacko
>            Assignee: Wes McKinney
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 0.6.0
>
>
> Pyarrow's dataset function can't read from S3 using s3fs as the filesystem. 
> Is there a way we can add support for reading partitioned files from S3?
> I am trying to address the problem described in this Stack Overflow question:
> https://stackoverflow.com/questions/45082832/how-to-read-partitioned-parquet-files-from-s3-using-pyarrow-in-python



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
