davlee1972 commented on PR #2494: URL: https://github.com/apache/arrow-adbc/pull/2494#issuecomment-2682912716
Well, it worked for me as well. I'm trying additional attempts with triple the amount of data; my original dataset has fewer columns but 100 million rows. **The only common pattern I'm seeing is that after 5 consecutive COPY INTO calls with no rows inserted, it moves on to run `select count(*)` and fails.** It just looks like a suspicious coincidence given an error message like "116 files remain after 5 retries".

Files loaded:

- yellow_tripdata_2015-01.csv
- yellow_tripdata_2015-02.csv
- yellow_tripdata_2015-03.csv
- yellow_tripdata_2016-01.csv
- yellow_tripdata_2016-02.csv
- yellow_tripdata_2016-03.csv
- yellow_tripdata_2016-04.csv
- yellow_tripdata_2016-05.csv
- yellow_tripdata_2016-06.csv
- yellow_tripdata_2016-07.csv
- yellow_tripdata_2016-08.csv
- yellow_tripdata_2016-09.csv

I dropped the table in between tests with `cursor.execute('drop table "taxi_data"')`.

**This bigger test failed on the first try, but succeeded on the second attempt.**

```
InternalError: INTERNAL: some files not loaded by COPY command, 116 files remain after 5 retries
time="2025-02-25T13:00:45-05:00" level=error msg="error: context canceled" func="gosnowflake.(*snowflakeConn).queryContextInternal" file="connection.go:413"
time="2025-02-25T13:00:45-05:00" level=error msg="error: context canceled" func="gosnowflake.(*snowflakeConn).queryContextInternal" file="connection.go:413"
time="2025-02-25T13:00:45-05:00" level=error msg="error: context canceled" func="gosnowflake.(*snowflakeConn).queryContextInternal" file="connection.go:413"
```
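For context, the test loop is roughly the sketch below. This is only an illustration of how I'm driving the ingest, not the exact script: the connection URI is a placeholder, and reading each CSV with pyarrow before calling `adbc_ingest` is an assumption on my part.

```python
# Rough sketch of the repro loop (placeholder URI; pyarrow CSV reads assumed).
import adbc_driver_snowflake.dbapi
import pyarrow.csv as pv

files = [
    "yellow_tripdata_2015-01.csv",
    # ... remaining monthly files listed above ...
]

with adbc_driver_snowflake.dbapi.connect("user:pass@account/db/schema") as conn:
    with conn.cursor() as cur:
        for i, path in enumerate(files):
            table = pv.read_csv(path)                        # load one month of trips
            mode = "create" if i == 0 else "append"          # create on first file, append after
            cur.adbc_ingest("taxi_data", table, mode=mode)   # bulk load (COPY INTO under the hood)
        conn.commit()
        cur.execute('select count(*) from "taxi_data"')      # this is where the failure surfaces
        print(cur.fetchone())
```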
