thisisnic commented on issue #46716:
URL: https://github.com/apache/arrow/issues/46716#issuecomment-2944058737

   I think the root cause of this can be seen in the PR which added 32-bit and 64-bit decimals (https://github.com/apache/arrow/pull/43957). From that PR's description:
   
   > ## Are there any user-facing changes?
   >
   > Currently if a user is using decimal(precision, scale) rather than decimal128(precision, scale) they will get a Decimal128Type if the precision is <= 38 (max precision for Decimal128) and Decimal256Type if the precision is higher. Following the same pattern, this change means that using decimal(precision, scale) instead of the specific decimal32/decimal64/decimal128/decimal256 functions results in the following functionality:
   >
   >    for precisions [1 : 9] => Decimal32Type
   >    for precisions [10 : 18] => Decimal64Type
   >    for precisions [19 : 38] => Decimal128Type
   >    for precisions [39 : 76] => Decimal256Type
   >
   > While many of our tests currently make the assumption that decimal with a low precision would be Decimal128 and had to be updated, this may cause an initial surprise if users are making the same assumptions.
   
   Perhaps implementing the Decimal32 and Decimal64 types in R would fix this.
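
   For context, here is a minimal C++ sketch of the mapping described in that PR, assuming a build of Arrow that includes #43957 and that the generic `arrow::decimal()` factory still follows it (the output noted in the comments is my expectation from the quoted description, not verified against a particular release):

   ```cpp
   #include <arrow/api.h>

   #include <iostream>

   int main() {
     // The generic decimal() factory is described as choosing the smallest
     // decimal type whose maximum precision covers the requested precision.
     for (int precision : {5, 12, 30, 50}) {
       std::shared_ptr<arrow::DataType> type = arrow::decimal(precision, /*scale=*/2);
       std::cout << "precision " << precision << " -> " << type->ToString() << "\n";
     }
     // Expected, per the quoted mapping:
     //   precision 5  -> Decimal32Type
     //   precision 12 -> Decimal64Type
     //   precision 30 -> Decimal128Type
     //   precision 50 -> Decimal256Type
     return 0;
   }
   ```

   Building it just needs the Arrow C++ headers and library on the link line.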

