davlee1972 commented on issue #2041:
URL: https://github.com/apache/arrow-adbc/issues/2041#issuecomment-2261528517

   I got version 1.1.0 to work. I think I had a combination of problems with
quoted identifiers for column and table names, which is what triggered the CSV issue.
   
   There are some challenging behaviors I had to work around, since Python is
case sensitive.
   
   1. A user requests columns in lower case, but Snowflake upper-cases the unquoted
identifiers in the SQL and returns upper-cased column names in the results, which do
not match the columns originally requested. I had to modify the SQL to inject quoted
aliases, like select a as "a", for every column on any type of data pull (see the
first sketch below).
   
   2. The COPY INTO command has a case-insensitive option for column matching
(MATCH_BY_COLUMN_NAME), but the table name passed into adbc_ingest does not get the
same treatment. I had to add some pre-checking to figure out the correct table name
to use (see the second sketch below).
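   
   For reference, here is a minimal sketch of the alias injection from item 1. The
helper names (alias_columns, build_select) are hypothetical, not part of any driver
API:
   
   ```python
   # Hypothetical helpers: wrap each requested column in a quoted alias so
   # Snowflake returns result columns in exactly the case the caller used.
   def alias_columns(columns):
       # Unquoted identifiers get upper-cased by Snowflake; a quoted alias
       # pins the output name, e.g. a as "a".
       return ", ".join(f'{col} as "{col}"' for col in columns)

   def build_select(table, columns):
       return f"select {alias_columns(columns)} from {table}"

   # build_select("my_table", ["a", "b"])
   # -> 'select a as "a", b as "b" from my_table'
   ```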
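   
   And a rough sketch of the table name pre-check from item 2, assuming the
adbc_driver_snowflake dbapi interface; resolve_table_name is a hypothetical helper,
and the string interpolation here should really be a bound parameter or properly
escaped:
   
   ```python
   import adbc_driver_snowflake.dbapi

   def resolve_table_name(conn, requested):
       # Unquoted table names are stored upper-cased by Snowflake, so compare
       # case-insensitively against information_schema and ingest into
       # whatever name actually exists.
       cur = conn.cursor()
       cur.execute(
           "select table_name from information_schema.tables "
           f"where upper(table_name) = upper('{requested}')"
       )
       row = cur.fetchone()
       cur.close()
       return row[0] if row else requested

   # Usage (the URI, table name, and arrow_table are placeholders):
   # with adbc_driver_snowflake.dbapi.connect(uri) as conn:
   #     cur = conn.cursor()
   #     table = resolve_table_name(conn, "my_table")
   #     cur.adbc_ingest(table, arrow_table, mode="append")
   ```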

