joellubi commented on issue #1847:
URL: https://github.com/apache/arrow-adbc/issues/1847#issuecomment-2113468212

   @zeroshade Yes, I just got a pure Go reproduction.
   
   With a record reader that produces 1 empty batch followed by 10 batches of 100 rows each (i.e. 1000 rows expected):
   ```
   joel@Joels-MacBook-Pro-2 20240514-sf-ingest-dropped-rows % go run main.go
   2024/05/15 17:12:24 retained
   2024/05/15 17:12:26 released
   2024/05/15 17:12:26 1000 rows ingested
   joel@Joels-MacBook-Pro-2 20240514-sf-ingest-dropped-rows % go run main.go
   2024/05/15 17:12:33 retained
   2024/05/15 17:12:36 released
   2024/05/15 17:12:36 500 rows ingested
   joel@Joels-MacBook-Pro-2 20240514-sf-ingest-dropped-rows % go run main.go
   2024/05/15 17:12:45 retained
   2024/05/15 17:12:49 released
   2024/05/15 17:12:49 800 rows ingested
   ```
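   For clarity, here is a minimal sketch of the batch layout the reader produces. This is illustrative plain Go only, not the actual reproduction (which builds real `arrow.Record` batches and feeds them through the driver's ingest path):
   ```go
   package main

   import "fmt"

   func main() {
       // Illustrative sketch (not the repro code itself): the record reader
       // yields one empty batch followed by ten batches of 100 rows each.
       batchSizes := []int{0} // the leading empty batch
       for i := 0; i < 10; i++ {
           batchSizes = append(batchSizes, 100)
       }

       // Every row in every batch should be ingested, so the expected
       // total is the sum of the batch sizes.
       expected := 0
       for _, n := range batchSizes {
           expected += n
       }
       fmt.Println(expected, "rows expected") // prints "1000 rows expected"
   }
   ```
   The Snowflake driver should always report that same total, yet as the runs above show, it intermittently reports fewer.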
   
   No code changes were made between those three runs. I also ran the same ingestion with the Postgres driver from Python, and the issue does not reproduce under any conditions. This seems to be specific to the Snowflake driver itself.

