praveentandra commented on PR #3295:
URL: https://github.com/apache/arrow-adbc/pull/3295#issuecomment-3222010160

   > > > As I understand it, the premise behind the original implementation was 
that not all consumers are able to meaningfully use a decimal128 value. So the 
driver was using the "best possible non-decimal128" type to store the value -- 
with possible loss of precision but no loss of scale.
   > > 
   > > 
   > > Precisely
   > 
   > Right, so given this I think that the original change which simply picked 
`float64` instead of `int64` when the scale was nonzero is a better choice.
   
   That's what I thought about decimal128 vs 64-bit types as well, and it makes 
sense, but the current implementation has a bug, described below. As you may 
know, Snowflake doesn't have a good way to distinguish integers from decimals, 
since it doesn't retain type aliases after table creation. I was looking to 
this flag as a way to work around that problem, but I'm unable to use the 
flag=false setting because of the bug. Below is how the data shows up in 
DuckDB after querying Snowflake via ADBC.
   
   D select c_custkey, c_name, c_acctbal from sf_db.tpch_sf1.customer order by 
c_custkey limit 5;
   
   use_high_precision = true: inefficient client-side type for integers 
(c_custkey)
   ┌───────────────┬────────────────────┬───────────────┐
   │   C_CUSTKEY   │       C_NAME       │   C_ACCTBAL   │
   │ decimal(38,0) │      varchar       │ decimal(12,2) │
   ├───────────────┼────────────────────┼───────────────┤
   │             1 │ Customer#000000001 │        711.56 │
   │             2 │ Customer#000000002 │        121.65 │
   │             3 │ Customer#000000003 │       7498.12 │
   │             4 │ Customer#000000004 │       2866.83 │
   │             5 │ Customer#000000005 │        794.47 │
   └───────────────┴────────────────────┴───────────────┘
   use_high_precision = false: incorrect type and values at the client for 
decimals (c_acctbal)
   ┌───────────┬────────────────────┬─────────────────────┐
   │ C_CUSTKEY │       C_NAME       │      C_ACCTBAL      │
   │   int64   │      varchar       │        int64        │
   ├───────────┼────────────────────┼─────────────────────┤
   │         1 │ Customer#000000001 │ 4649470163769863700 │
   │         2 │ Customer#000000002 │ 4638260774666082714 │
   │         3 │ Customer#000000003 │ 4664966284827552645 │
   │         4 │ Customer#000000004 │ 4658522640913436508 │
   │         5 │ Customer#000000005 │ 4650199447842334966 │
   └───────────┴────────────────────┴─────────────────────┘
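   
   For what it's worth, the garbled int64 values above appear to be exactly 
the raw IEEE-754 bit patterns of the expected float64 values, which suggests 
the driver is producing float64 data but labeling the Arrow field as int64. 
A quick standalone check (my own sketch, not driver code):

   ```go
   package main

   import (
   	"fmt"
   	"math"
   )

   func main() {
   	// Expected C_ACCTBAL values vs. the int64 values DuckDB reported.
   	expected := []float64{711.56, 121.65, 7498.12, 2866.83, 794.47}
   	observed := []int64{
   		4649470163769863700,
   		4638260774666082714,
   		4664966284827552645,
   		4658522640913436508,
   		4650199447842334966,
   	}
   	for i, f := range expected {
   		// Reinterpret the float64's raw IEEE-754 bits as an int64.
   		bits := int64(math.Float64bits(f))
   		fmt.Printf("%8.2f -> %d (match: %t)\n", f, bits, bits == observed[i])
   	}
   	// Every row prints match: true.
   }
   ```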
   
   Given this behavior, I doubt any clients are actually using 
use_high_precision = false.
   
   Please advise which of the below you'd prefer:
   - Fix the issue and emit float64 for nonzero-scale decimals
   - Fix the issue and retain decimal128
   - Leave it as is; let's solve the decimal(p, s=0) -> int conversion 
separately
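   
   To make the first option concrete, the mapping implied there could look 
roughly like this. This is a hypothetical sketch, not the driver's actual 
code: the helper names are made up, and the type-selection rule is my 
reading of the discussion above.

   ```go
   package main

   import "fmt"

   // sketchArrowType picks a non-decimal128 Arrow type for a Snowflake
   // NUMBER(precision, scale) column: scale == 0 -> int64 (exact, with
   // possible overflow for very large precision), scale > 0 -> float64
   // (possible loss of precision, but no loss of scale).
   // Hypothetical helper, not the driver's API.
   func sketchArrowType(precision, scale int) string {
   	if scale == 0 {
   		return "int64"
   	}
   	return "float64"
   }

   // scaledToFloat64 converts an unscaled integer plus scale to a float64
   // value. This value conversion must accompany the type change; the bug
   // shown above is what happens when it does not.
   func scaledToFloat64(unscaled int64, scale int) float64 {
   	p := 1.0
   	for i := 0; i < scale; i++ {
   		p *= 10
   	}
   	return float64(unscaled) / p
   }

   func main() {
   	fmt.Println(sketchArrowType(38, 0))    // C_CUSTKEY decimal(38,0) -> int64
   	fmt.Println(sketchArrowType(12, 2))    // C_ACCTBAL decimal(12,2) -> float64
   	fmt.Println(scaledToFloat64(71156, 2)) // 711.56
   }
   ```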


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
