zeroshade commented on issue #3766:
URL: https://github.com/apache/arrow-adbc/issues/3766#issuecomment-3613348670

   > Thanks @zeroshade, I overlooked that part of the discussion. But I would
   > have assumed that the driver reads the parquet file, then inspects the
   > metadata and makes an informed decision on what the column type should be,
   > and sends that information to Snowflake together with the data?
   
   > I assume that currently the driver 'selects' TIMESTAMP_LTZ for
   > isAdjustedToUTC=true columns and I think that this behaviour is wrong, let
   > me try to lay out why.
   
   
   That's not what happens. The driver writes the Arrow stream to a series of 
Parquet files in parallel and uploads them (also in parallel) to a Snowflake 
"stage", per Snowflake's best practices. It then executes `COPY INTO` queries 
on Snowflake to create or append to the table from the resulting Parquet 
files. The decision to "select" `TIMESTAMP_LTZ` is made by Snowflake's 
backend, so the issue needs to be addressed on the Snowflake side first.
   
   I've escalated this to our Snowflake contacts internally, but feel free to 
reach out to Snowflake on your end too. Once this is addressed on Snowflake's 
side, we can potentially add an option to the driver.

