[ 
https://issues.apache.org/jira/browse/ARROW-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17452453#comment-17452453
 ] 

Joris Van den Bossche commented on ARROW-14930:
-----------------------------------------------

A few questions to help diagnose the problem. Could you first check whether the 
filesystem object itself can find the directories/files (i.e., whether the 
problem lies in the filesystem or in the dataset code)? For example, could you try:

{code:python}
scality.get_file_info("dasynth")
scality.get_file_info("dasynth/parquet/taxies")
{code}

If the parameters for the S3FileSystem are correct, it should normally be able 
to give some basic information about the bucket.

> [Python] FileNotFound when using bucket+folders in S3 + partitioned parquet
> ---------------------------------------------------------------------------
>
>                 Key: ARROW-14930
>                 URL: https://issues.apache.org/jira/browse/ARROW-14930
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 6.0.1
>         Environment: linux + python 3.8
>            Reporter: Luis Morales
>            Priority: Trivial
>             Fix For: 6.0.2
>
>
> When using dataset.Dataset with S3FileSystem against an S3-compatible object 
> storage, I get a FileNotFoundError.
>  
> My code:
>  
> scality = fs.S3FileSystem(access_key='accessKey1', 
> secret_key='verySecretKey1', endpoint_override="http://localhost:8000", 
> region="")
> data = ds.dataset("dasynth/parquet/taxies/2019_june/", format="parquet", 
> partitioning="hive", filesystem=scality)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
