vleslief-ms opened a new pull request, #1274:
URL: https://github.com/apache/arrow-adbc/pull/1274

   Adding tests which do a simple insert, select, and delete for the numeric
data types in Snowflake, to validate that they can be written and read correctly
through ADBC. The target values are ones that are typically problematic to read
and write, such as infinity, NaN, and minimum and maximum values, but some more
ordinary values are included as a sanity check.
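   
   For illustration, boundary values of the kind described above look like the following (a sketch only; the exact values exercised by the tests live in the PR itself):
   
   ```python
   import math

   # Illustrative boundary values for Snowflake's two numeric families;
   # the actual test data in this PR may differ.
   FLOAT_VALUES = [
       0.0, 1.5, -1.5,                 # ordinary values, for sanity
       math.inf, -math.inf, math.nan,  # IEEE 754 special values
       1.7976931348623157e308,         # largest finite float64
       5e-324,                         # smallest positive subnormal float64
   ]
   NUMBER_38_0_VALUES = [
       0, 1, -1,
       10**38 - 1,     # maximum for NUMBER(38,0)
       -(10**38 - 1),  # minimum for NUMBER(38,0)
   ]
   ```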
   
   Test structure (sketched in code after the list):
   
   1. Create a table with a single column of the chosen numeric data type.
Snowflake effectively has two: NUMBER and FLOAT, with NUMBER varying by
precision and scale. The table uses the catalog and schema specified in the
test config, but its name is generated to avoid colliding with or overwriting
other data.
   2. Insert a single value of the correct type into the table.
   3. Query the value back from the table and validate it for correctness.
   4. Delete the row, using the value in the WHERE clause.
   5. Drop the table.
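   
   A minimal sketch of those five steps, written against the Python `adbc_driver_snowflake` DB-API bindings rather than this PR's actual test harness; the connection URI is a placeholder, and values are spliced in as SQL literal strings to keep the sketch short:
   
   ```python
   import uuid

   import adbc_driver_snowflake.dbapi


   def round_trip(cur, sql_type: str, literal: str, expected) -> None:
       # Step 1: a generated table name avoids collisions with existing data.
       table = f"ADBC_NUMERIC_TEST_{uuid.uuid4().hex.upper()}"
       cur.execute(f"CREATE TABLE {table} (v {sql_type})")
       try:
           # Step 2: insert a single value of the chosen type.
           cur.execute(f"INSERT INTO {table} VALUES ({literal})")
           # Step 3: read the value back and validate it (NaN never equals
           # itself in Python, so both-NaN counts as a match).
           cur.execute(f"SELECT v FROM {table}")
           (got,) = cur.fetchone()
           assert got == expected or (got != got and expected != expected)
           # Step 4: delete the row, using the value in the WHERE clause.
           # (Snowflake treats NaN as equal to itself, so this also works
           # for the special values.)
           cur.execute(f"DELETE FROM {table} WHERE v = {literal}")
       finally:
           # Step 5: drop the table.
           cur.execute(f"DROP TABLE {table}")


   # Placeholder URI; the real tests take catalog and schema from the config.
   with adbc_driver_snowflake.dbapi.connect("user:pass@account/db/schema") as conn:
       cur = conn.cursor()
       round_trip(cur, "NUMBER(38,0)", "42", 42)
       round_trip(cur, "FLOAT", "'NaN'::FLOAT", float("nan"))
   ```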
   
   As of writing, any test that uses NUMBER with a precision and scale other
than the default (38,0) fails. These tests should start passing once
[#1267](https://github.com/apache/arrow-adbc/pull/1267) is merged.
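   
   Reusing the `round_trip` sketch above, the currently failing shape is any non-default precision and scale, for example:
   
   ```python
   from decimal import Decimal

   # Hypothetical reproduction via the sketch above; expected to fail until
   # the precision/scale fix in #1267 is merged.
   round_trip(cur, "NUMBER(38,2)", "123456.78", Decimal("123456.78"))
   ```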

