VARSHAJOSHY commented on issue #38035:
URL: https://github.com/apache/arrow/issues/38035#issuecomment-1749654777

   Hi,
   Thanks for the reply. Our environment does not support pyarrow.csv.write_csv; 
it fails with "pyarrow.csv doesn't have attribute write_csv". I tried a 
couple of other approaches to stream a Parquet file to CSV:
   
   1. pyarrow Table -> pandas DataFrame -> to_csv
   2. pyarrow Table -> to_batches -> CSV
   3. pyarrow Table -> iterchunks -> CSV
   4. pyarrow Table -> Dask DataFrame -> CSV
   
   but none of these perform well when the Parquet file is larger than 10 GB.
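   
   For reference, here is a minimal sketch of the batch-wise variant (approach 2), assuming pyarrow >= 3.0 and pandas are available; the file paths and batch size are placeholders, not values from our setup:
   
   ```python
   import pyarrow.parquet as pq
   
   parquet_path = "input.parquet"   # hypothetical input path
   csv_path = "output.csv"          # hypothetical output path
   
   parquet_file = pq.ParquetFile(parquet_path)
   
   with open(csv_path, "w", newline="") as out:
       # Read the Parquet file in record batches so the whole table
       # never has to fit in memory at once.
       for i, batch in enumerate(parquet_file.iter_batches(batch_size=65536)):
           # Convert each RecordBatch to pandas and append it to the CSV,
           # writing the header only for the first batch.
           batch.to_pandas().to_csv(out, index=False, header=(i == 0))
   ```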

