tgravescs commented on PR #45157:
URL: https://github.com/apache/spark/pull/45157#issuecomment-1954460818

   Sorry, I've been OOO and busy with other things, so I didn't fully follow the discussion in https://github.com/apache/spark/pull/44690.
   
   Can you two please summarize exactly why we are changing this? Is there actually some case the previous method did not handle? Yes, I understand there could be some precision loss when dealing with floats/doubles, but I think that loss is going to come from the API itself.
   
   Is this purely for readability at this point?
   
   I think this changes us from storing longs to storing BigDecimals, which is not going to be more memory efficient, and I thought BigDecimal was fairly slow at operations, though maybe Java has improved that since. There obviously was some conversion going on before with toInternalResource, so there is some overhead there that could be saved if this makes things more efficient. Doing some very rough manual testing in a Scala shell, creating a BigDecimal takes around 30,000ns, while doing the conversions takes around 6,000ns.
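   For reference, here is a minimal sketch of the kind of shell micro-benchmark I mean. The `Scale` constant and the `toInternalResource` stand-in are illustrative only, not Spark's actual internals, and the timing harness is intentionally crude:
   
   ```scala
   object ResourceAmountBench {
     // Hypothetical scaling factor; Spark's real internal constant may differ.
     private val Scale = 10000L
   
     // Illustrative stand-in for a toInternalResource-style long conversion.
     def toInternalResource(amount: Double): Long = (amount * Scale).toLong
   
     // Very rough timing harness: warm up once, then average over iterations.
     private def timeAvgNanos(iters: Int)(body: => Long): Long = {
       var sink = 0L // keep results live so the JIT can't drop the work
       (1 to iters).foreach(_ => sink += body)
       val start = System.nanoTime()
       (1 to iters).foreach(_ => sink += body)
       val avg = (System.nanoTime() - start) / iters
       if (sink == Long.MinValue) println(sink) // defeat dead-code elimination
       avg
     }
   
     def main(args: Array[String]): Unit = {
       val iters = 100000
       val viaLong = timeAvgNanos(iters)(toInternalResource(0.5))
       val viaBigDecimal = timeAvgNanos(iters)(BigDecimal(0.5).toLong)
       println(s"long conversion: ~${viaLong}ns/op, BigDecimal: ~${viaBigDecimal}ns/op")
     }
   }
   ```
   
   Numbers from a harness like this are only ballpark figures, but they line up with the rough magnitudes above.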
   
   

