wperron opened a new issue, #3668:
URL: https://github.com/apache/arrow-datafusion/issues/3668

   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   Hey folks :wave: we're evaluating DataFusion/Ballista to run queries as part of a pipeline over some fairly large datasets (on the order of a few terabytes at minimum). We'd like to read Parquet files from GCS and write the results back as Parquet to a different GCS bucket. Right now it's possible to get a DataFrame from GCS using the object_store crate, but it's not possible to write the resulting DataFrame back to GCS.
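   
   To make the ask concrete, here's a rough sketch of the flow we have in mind. The bucket names and credential path are placeholders, the registration call's exact shape is an assumption (the object store registration API has shifted between DataFusion releases), and the final `write_parquet` call against a `gs://` URL is the part that doesn't exist yet:
   
   ```rust
   use std::sync::Arc;
   
   use datafusion::prelude::{ParquetReadOptions, SessionContext};
   use object_store::gcp::GoogleCloudStorageBuilder;
   
   #[tokio::main]
   async fn main() -> datafusion::error::Result<()> {
       let ctx = SessionContext::new();
   
       // Hypothetical bucket name and service-account path.
       let gcs = GoogleCloudStorageBuilder::new()
           .with_bucket_name("source-bucket")
           .with_service_account_path("/path/to/service-account.json")
           .build()
           .expect("failed to build GCS object store");
   
       // Registration API as of recent releases; the exact signature has
       // changed between DataFusion versions, so treat this as illustrative.
       ctx.runtime_env()
           .register_object_store("gs", "source-bucket", Arc::new(gcs));
   
       // Reading Parquet from GCS already works through the registered store.
       let df = ctx
           .read_parquet("gs://source-bucket/input/", ParquetReadOptions::default())
           .await?;
   
       // What we'd like to be able to do (missing today): write the result
       // back out as Parquet to a *different* GCS bucket, e.g. something like
       //
       //     df.write_parquet("gs://destination-bucket/output/", None).await?;
       //
       // Currently write_parquet only targets local filesystem paths.
       let _ = df;
   
       Ok(())
   }
   ```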
   
   **Additional context**
   This feature was already discussed in #2185 but that issue is out of date 
because the new object_store crate has since been merged in.
   

