huaxingao commented on pull request #29695:
URL: https://github.com/apache/spark/pull/29695#issuecomment-703792100


   > For example, since PostgreSQL can have pretty large scale/precision in numeric, I think partial aggregated values can go over the value range of Spark decimals. So, I'm not sure that we can keep the same behaviour between with/without aggregate pushdown.
   
   If the database supports larger precision and scale than Spark does, I think we can adjust the precision and scale to fit within Spark's limits using either `DecimalType.bounded` or `DecimalType.adjustPrecisionScale`.
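
   For illustration, here is a minimal sketch of the clamping idea. `boundedDecimal` is a hypothetical helper that mirrors what `DecimalType.bounded` does (capping precision and scale at Spark's `MAX_PRECISION`/`MAX_SCALE`); it is not Spark's actual code path, and the real pushdown handling may differ:

```scala
import org.apache.spark.sql.types.DecimalType

// Hypothetical helper (a sketch, not Spark's implementation): clamp a
// remote database's NUMERIC(precision, scale) into the range Spark
// decimals support, the way DecimalType.bounded caps both values.
def boundedDecimal(precision: Int, scale: Int): DecimalType =
  DecimalType(
    math.min(precision, DecimalType.MAX_PRECISION),
    math.min(scale, DecimalType.MAX_SCALE))

// For example, a PostgreSQL numeric(50, 20) column would come back as
// DecimalType(38, 20), since Spark's MAX_PRECISION is 38.
val adjusted: DecimalType = boundedDecimal(50, 20)
```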

