klin333 commented on issue #219:
URL: 
https://github.com/apache/arrow-nanoarrow/issues/219#issuecomment-1925957809

   Thank you very much for the detailed response. I apologise for not learning 
about ALTREP before posting. What you said makes perfect sense.
   
   I originally went down this path because I was using DBI, adbi, and 
adbcdrivermanager to read data from Snowflake. Previously I used 
DBI::dbReadTable, which converted the nanoarrow batched stream to a data.frame 
and took hours to read 1 GB tables. I've now realised the proper solution is to 
use DBI::dbReadTableArrow and keep the data in Arrow format, which is perfectly 
fine with me (actually preferable, since all I was doing was calling 
arrow::write_parquet). Fetching the data and writing parquet via ADBC is now 6 
times faster than ODBC. Perfect.
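
   For reference, a minimal sketch of that ADBC path, assuming the 
adbcsnowflake driver and the adbi DBI backend are installed; the connection 
URI, table name, and output path are placeholders:

   ```r
   library(DBI)
   library(adbi)    # DBI interface on top of ADBC
   library(arrow)

   con <- dbConnect(
     adbi::adbi("adbcsnowflake"),
     uri = "user:password@account/database/schema"  # placeholder credentials
   )

   # dbReadTableArrow() returns an Arrow stream rather than materialising a
   # data.frame, so the data stays in Arrow format end to end.
   stream <- dbReadTableArrow(con, "MY_TABLE")

   # Convert the stream to an Arrow Table and write it straight to parquet
   # (assumes arrow/nanoarrow conversion methods are available).
   write_parquet(as_arrow_table(stream), "my_table.parquet")

   dbDisconnect(con)
   ```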

