[
https://issues.apache.org/jira/browse/ARROW-14930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17455947#comment-17455947
]
Luis Morales commented on ARROW-14930:
--------------------------------------
Just as a reference: awswrangler (AWS Data Wrangler) works perfectly, with methods like:

wr.s3.list_objects(path='s3://dasynth/parquet', boto3_session=sesion)

returning the objects (parquet files), and chunked reads:

chunks = 0
for chunk in wr.s3.read_parquet('s3://dasynth/parquet/taxies/2019/',
                                dataset=True, boto3_session=sesion,
                                use_threads=True, chunked=True):
    chunks += 1

my_second_filter = lambda x: (x["payment_type"].startswith("2")
                              and x["month_year"].startswith("2019-06"))

for chunk in wr.s3.read_parquet(path='s3://dasynth/parquet/taxies/2019/',
                                dataset=True, partition_filter=my_second_filter,
                                boto3_session=sesion, use_threads=True,
                                chunked=True):
    chunks += 1

which works properly with partition filters as well.
This is what I want to get from pyarrow, since it is a lighter library
and not coupled to AWS-specific open source projects.
> [Python] FileNotFound when using bucket+folders in S3 + partitioned parquet
> ---------------------------------------------------------------------------
>
> Key: ARROW-14930
> URL: https://issues.apache.org/jira/browse/ARROW-14930
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 6.0.1
> Environment: linux + python 3.8
> Reporter: Luis Morales
> Priority: Trivial
> Fix For: 6.0.2
>
>
> When using a pyarrow dataset with S3FileSystem against an S3-compatible
> object storage, I get a FileNotFoundError.
>
> My code:
>
> scality = fs.S3FileSystem(access_key='accessKey1',
>                           secret_key='verySecretKey1',
>                           endpoint_override="http://localhost:8000",
>                           region="")
> data = ds.dataset("dasynth/parquet/taxies/2019_june/", format="parquet",
>                   partitioning="hive", filesystem=scality)
--
This message was sent by Atlassian Jira
(v8.20.1#820001)