[
https://issues.apache.org/jira/browse/ARROW-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931542#comment-16931542
]
Krisztian Szucs edited comment on ARROW-5072 at 9/17/19 3:10 PM:
-----------------------------------------------------------------
The root cause lies in s3fs: {{S3File}} buffers writes, and no S3
request happens until the file is closed or flushed:
{code:python}
out = S3File(S3FileSystem(), 's3://some-bogus-bucket/df.parquet', mode='wb')
out.write(b'bbb')  # returns 3
del out # raises exception
{code}
If we called {{out.close()}} the buffer would be flushed, but since the file
is opened outside of Arrow, the caller should be responsible for closing it.
We could call {{flush()}} on file-like objects, although it works a bit
differently with {{S3File}}:
{code:python}
out.flush() # doesn't raise
out.flush(force=True) # raises, but force keyword is S3File / fsspec specific
{code}
When {{S3FileSystem}} is used, the file object is opened, and thus closed,
by Arrow, so the exception propagates from {{pq.write_table()}}:
{code:python}
pq.write_table(table, s3_filepath, filesystem=S3FileSystem())  # raises
{code}
I'd consider this an s3fs issue, because {{S3File().write()}} succeeds on a
non-existent file (actually I get a not-authorized error, because I don't have
S3 credentials set up), and the error is only raised on object destruction,
from {{S3File.__del__()}}.
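The silent failure is really a property of Python finalization: an exception raised inside {{\_\_del\_\_}} is printed to stderr by the interpreter but never propagated to the caller. A minimal sketch with no S3 involved at all ({{BufferedWriter}} below is a made-up stand-in for {{S3File}}):
{code:python}
import io

class BufferedWriter:
    """Made-up stand-in for S3File: buffers writes, fails at close time."""
    def __init__(self):
        self.buffer = io.BytesIO()
        self.closed = False

    def write(self, data):
        # Nothing is "uploaded" yet; bytes only land in the local buffer.
        return self.buffer.write(data)

    def close(self):
        self.closed = True
        # Simulate the deferred upload failing on flush/close.
        raise OSError('Write failed')

    def __del__(self):
        if not self.closed:
            self.close()

out = BufferedWriter()
out.write(b'bbb')       # returns 3, no error yet
suppressed = True
try:
    del out             # __del__ -> close() raises, but the interpreter swallows it
except OSError:
    suppressed = False
print(suppressed)       # True: the error never reached the caller
{code}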
From Arrow's perspective the observed behaviour is expected, although we could
add a note about this s3fs case to the documentation.
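Until s3fs raises earlier, a caller who opens the file themselves can make such failures visible by closing explicitly inside their own error-handling scope, e.g. with {{contextlib.closing}}. A sketch with the same kind of made-up stand-in writer (with a real {{S3File}}, the {{with}} body would be the {{pq.write_table()}} call):
{code:python}
import contextlib
import io

class BufferedWriter:
    """Made-up stand-in for S3File: buffers writes, fails at close time."""
    def __init__(self):
        self.buffer = io.BytesIO()
        self.closed = False

    def write(self, data):
        return self.buffer.write(data)

    def close(self):
        self.closed = True
        raise OSError('Write failed')

    def __del__(self):
        if not self.closed:
            self.close()

caught = False
try:
    # closing() guarantees close() runs while we can still catch the error.
    with contextlib.closing(BufferedWriter()) as out:
        out.write(b'bbb')
except OSError:
    caught = True
print(caught)   # True: the close-time error now propagates to the caller
{code}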
cc [~wesmckinn] [~pitrou] [~mdurant]
> [Python] write_table fails silently on S3 errors
> ------------------------------------------------
>
> Key: ARROW-5072
> URL: https://issues.apache.org/jira/browse/ARROW-5072
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.12.1
> Environment: Python 3.6.8
> Reporter: Paul George
> Priority: Minor
> Labels: filesystem, parquet
> Fix For: 0.15.0
>
>
> {{pyarrow==0.12.1}}
> *pyarrow.parquet.write_table* called with where=S3File(...) fails silently
> when encountering errors while writing to S3 (in the example below, boto3 is
> raising a NoSuchBucket exception). However, instead of using S3File(),
> calling write_table with where=_<filepath>_ and with
> filesystem=S3FileSystem() does *not* fail silently and raises, as is expected.
> h4. Code/Repro
>
> {code:python}
> import pandas as pd
> import pyarrow as pa
> import pyarrow.parquet as pq
> from s3fs import S3File, S3FileSystem
> df = pd.DataFrame({'col0': []})
> s3_filepath = 's3://some-bogus-bucket/df.parquet'
> print('>> test 1')
> try:
>     # use S3File --> fails silently
>     pq.write_table(pa.Table.from_pandas(df.copy()),
>                    S3File(S3FileSystem(), s3_filepath, mode='wb'))
> except Exception:
>     print('>>>> Exception raised!')
> else:
>     print('>>>> Exception **NOT** raised!')
> print('>> test 2')
> try:
>     # use filepath and S3FileSystem --> raises Exception, as expected
>     pq.write_table(pa.Table.from_pandas(df.copy()),
>                    s3_filepath,
>                    filesystem=S3FileSystem())
> except Exception:
>     print('>>>> Exception raised!')
> else:
>     print('>>>> Exception **NOT** raised!'){code}
>
> h4. Output
> {code}
> >> test 1
> Exception ignored in: <bound method S3File.__del__ of <S3File some-bogus-bucket/df.parquet>>
> Traceback (most recent call last):
>   File "<redacted>/lib/python3.6/site-packages/s3fs/core.py", line 1476, in __del__
>     self.close()
>   File "<redacted>/lib/python3.6/site-packages/s3fs/core.py", line 1454, in close
>     raise_from(IOError('Write failed: %s' % self.path), e)
>   File "<string>", line 3, in raise_from
> OSError: Write failed: some-bogus-bucket/df.parquet
> >>>> Exception **NOT** raised!
> >> test 2
> >>>> Exception raised!
> Exception ignored in: <bound method S3File.__del__ of <S3File some-bogus-bucket/df.parquet>>
> Traceback (most recent call last):
>   File "<redacted>/lib/python3.6/site-packages/s3fs/core.py", line 1476, in __del__
>     self.close()
>   File "<redacted>/lib/python3.6/site-packages/s3fs/core.py", line 1454, in close
>     raise_from(IOError('Write failed: %s' % self.path), e)
>   File "<string>", line 3, in raise_from
> OSError: Write failed: some-bogus-bucket/df.parquet
> {code}
--
This message was sent by Atlassian Jira
(v8.3.2#803003)