joellubi commented on issue #1322:
URL: https://github.com/apache/arrow-adbc/issues/1322#issuecomment-1840565785

Got it. So it does appear likely that your Arrow table meets the Snowflake Connector's conditions linked above for optimized bulk ingestion. For context, if those conditions are not met, an even slower approach is taken that renders all of the values into the `INSERT` statement itself.
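
To make that concrete, here is a rough sketch (not the connector's actual code) of what that literal-rendering fallback amounts to; the table name, columns, and quoting logic are hypothetical and simplified:

```go
package main

import (
	"fmt"
	"strings"
)

// renderInsert sketches the slow fallback: every value is serialized into the
// INSERT statement text itself, so the statement grows with the row count.
// Table and column names here are hypothetical.
func renderInsert(table string, cols []string, rows [][]string) string {
	tuples := make([]string, 0, len(rows))
	for _, row := range rows {
		quoted := make([]string, len(row))
		for i, v := range row {
			// Naive string-literal rendering for illustration only; the real
			// connector serializes each value according to its Snowflake type.
			quoted[i] = "'" + strings.ReplaceAll(v, "'", "''") + "'"
		}
		tuples = append(tuples, "("+strings.Join(quoted, ", ")+")")
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(cols, ", "), strings.Join(tuples, ", "))
}

func main() {
	fmt.Println(renderInsert("MY_TABLE", []string{"ID", "NAME"},
		[][]string{{"1", "alice"}, {"2", "bob"}}))
}
```

The statement text (and parse time) grows linearly with the number of rows, which is why this fallback is so much slower than the staged path.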
   
Most likely, in your case the connector is taking the faster approach of uploading the Arrow records to a temporary stage and then inserting from there. Certain datatype limitations are imposed in part because the temporary stage always uses the CSV format, and all values are first converted to built-in Go types. Since you are using CSV as input, this doesn't appear to be an issue for you.
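
For context, that stage-based fast path conceptually boils down to a `PUT` of the rendered CSV followed by a `COPY INTO`, roughly like the sketch below (this uses gosnowflake directly rather than the connector's internals, and the DSN, stage path, file name, and `MY_TABLE` are all hypothetical):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake" // registers the "snowflake" driver
)

func main() {
	// Hypothetical DSN; substitute real credentials/account to run.
	db, err := sql.Open("snowflake", "user:password@my_account/my_db/my_schema")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// 1. Upload a locally rendered CSV of the record batch to a stage.
	if _, err := db.Exec(`PUT file:///tmp/batch.csv @~/adbc_ingest AUTO_COMPRESS=TRUE`); err != nil {
		log.Fatal(err)
	}

	// 2. Bulk load from the stage into the target table with a single COPY.
	if _, err := db.Exec(`COPY INTO MY_TABLE FROM @~/adbc_ingest/batch.csv.gz
		FILE_FORMAT = (TYPE = CSV)`); err != nil {
		log.Fatal(err)
	}
}
```

The connector handles the staging details itself; the point is just that ingestion happens through a staged file and a single `COPY`, rather than through per-row `INSERT`s.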
   
Given that this code is already making use of the Snowflake Connector's optimized ingestion path but is still running into limitations, I do think it makes sense to handle this on the ADBC side rather than delegating to the connector. I'm bringing follow-up discussion of potential solutions to #1327.

