davlee1972 commented on issue #1847:
URL: https://github.com/apache/arrow-adbc/issues/1847#issuecomment-2113304353

   OK, I figured it out: there is a bug in how adbc_ingest handles batches.
   
   I reduced the number of records to 12,000 and the bug still happens.
   
   If I read data from a partial set of parquet files and try to write it to Snowflake, I get ZERO rows inserted.
   
   If I instead write my data out to a single parquet file, reread it, and then write it to Snowflake, all of my rows are inserted. (A rough sketch of both paths follows the screenshot below.)
   
   
![image](https://github.com/apache/arrow-adbc/assets/24494353/946ae407-9abe-41e5-ad34-81ba5dda1bf8)
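
   Here is a minimal sketch of the two paths, in case it helps reproduce. This is an approximation rather than my exact code: the connection URI, parquet file names, and `MY_TABLE` are placeholders, and I'm assuming pyarrow plus the `adbc_driver_snowflake` dbapi wrapper.

```python
# Sketch of the two ingestion paths. Placeholders: URI, file names, MY_TABLE.
import pyarrow.dataset as ds
import pyarrow.parquet as pq
import adbc_driver_snowflake.dbapi

URI = "user:password@account/database/schema"  # placeholder credentials
FILES = ["part-0.parquet", "part-1.parquet"]   # a partial set of the files

# Failing path: a multi-file dataset produces a multi-batch stream, and
# adbc_ingest completes without error but ZERO rows land in Snowflake.
reader = ds.dataset(FILES).scanner().to_reader()
with adbc_driver_snowflake.dbapi.connect(URI) as conn, conn.cursor() as cur:
    cur.adbc_ingest("MY_TABLE", reader, mode="create")
    conn.commit()

# Working path (run as a separate attempt): concatenate everything into one
# parquet file, reread it, and ingest the single table; all rows arrive.
pq.write_table(ds.dataset(FILES).to_table(), "combined.parquet")
with adbc_driver_snowflake.dbapi.connect(URI) as conn, conn.cursor() as cur:
    cur.adbc_ingest("MY_TABLE", pq.read_table("combined.parquet"), mode="replace")
    conn.commit()
```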
   
   On a side note, I'm not sure why these params are STRINGs.
   
   
![image](https://github.com/apache/arrow-adbc/assets/24494353/740e69ce-98c4-4f74-879e-547ac779d4c6)
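
   My best guess on the STRINGs: the ADBC C API's option setters (e.g. `AdbcStatementSetOption`) take string values, so numeric driver options end up being passed as strings through the bindings as well. A sketch of setting them inside the `with` block above, before `adbc_ingest`; this assumes a recent `adbc_driver_manager` where the dbapi cursor exposes the low-level statement handle, and the option keys are my assumption (taken from the Snowflake driver docs), not from the screenshot:

```python
# Sketch only: assumes cur.adbc_statement and its set_options() helper are
# available in your adbc_driver_manager version. The keys are Snowflake
# ingestion tuning options from the driver docs (an assumption on my part);
# the values must be strings because the C-level option setter takes strings.
cur.adbc_statement.set_options(**{
    "adbc.snowflake.statement.ingest_writer_concurrency": "8",
    "adbc.snowflake.statement.ingest_upload_concurrency": "4",
})
```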
   
   

