mykvvv commented on issue #4030:
URL: https://github.com/apache/arrow/issues/4030#issuecomment-796572575


   Hi guys,
   
   The issue is with the s3fs library: the same code works fine in a local 
environment, but when you pass an AWS S3 path and use s3fs together with 
pandas, you will hit this error.
   
   A workaround is below:
   
   import pyarrow.parquet as pq
   import pyarrow as pa
   from s3fs import S3FileSystem
   import pandas as pd
   
   df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
   # ACCESS_KEY_ID / SECRET_ACCESS_KEY are placeholders for your AWS credentials
   s3 = S3FileSystem(key=ACCESS_KEY_ID, secret=SECRET_ACCESS_KEY)
   table = pa.Table.from_pandas(df)
   
   # Pass the s3fs filesystem object explicitly instead of letting pandas infer it
   pq.write_to_dataset(table=table,
                       root_path='s3://bucket-name/key-path',
                       filesystem=s3,
                       compression='snappy',
                       partition_cols=['a'])
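   
   For anyone without S3 access who wants to see what the partitioned write 
produces, the same call can be exercised against the local filesystem by 
simply omitting the `filesystem=` argument. This is a sketch for 
illustration, not the original code; the temporary directory stands in for 
the S3 bucket path:
   
   import os
   import tempfile
   import pandas as pd
   import pyarrow as pa
   import pyarrow.parquet as pq
   
   df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
   table = pa.Table.from_pandas(df)
   
   # A local directory stands in for 's3://bucket-name/key-path'
   root = tempfile.mkdtemp()
   pq.write_to_dataset(table=table,
                       root_path=root,
                       compression='snappy',
                       partition_cols=['a'])
   
   # write_to_dataset creates Hive-style partition directories: a=0/, a=1/
   print(sorted(d for d in os.listdir(root) if d.startswith("a=")))
   
   # Reading the dataset back restores the partition column 'a'
   readback = pq.read_table(root).to_pandas()
   print(sorted(readback.columns))
   
   The same directory layout (key prefixes `a=0/` and `a=1/`) is what you 
should see in the S3 bucket when the s3fs version of the call succeeds.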
   
   Maybe this can help others arriving here.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]