Re: Spark SQL Dataset and BigDecimal

2021-02-18 Thread Khalid Mammadov
As the Scala book says, value types are mapped to Java primitive types. So when you use Int, for example, it compiles down to the primitive int. Int is essentially syntactic sugar that reads better in Scala code than a plain int, and Scala adds extra perks on top through implicits etc. I think the same applies here.
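
A minimal sketch of that mapping, using only the standard library (the object name is illustrative):

    // Scala's Int erases to the JVM primitive int in bytecode, and
    // scala.math.BigDecimal just wraps a java.math.BigDecimal.
    object ValueTypesSketch {
      def main(args: Array[String]): Unit = {
        val n: Int = 42                                        // primitive int at runtime
        val scalaBD: BigDecimal = BigDecimal("12.34")          // scala.math.BigDecimal
        val javaBD: java.math.BigDecimal = scalaBD.bigDecimal  // the wrapped Java value
        println(javaBD.getClass)                               // class java.math.BigDecimal
      }
    }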

Re: Spark SQL Dataset and BigDecimal

2021-02-18 Thread Ivan Petrov
I'm fine with both. So does it make sense to use java.math.BigDecimal everywhere to avoid the performance penalty of value conversion? scala.math.BigDecimal looks like just a wrapper around java.math.BigDecimal, though... On Thu, 18 Feb 2021 at 00:33, Takeshi Yamamuro wrote: > Yea, I think that's because it's needed for interoperability between scala/java.
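
A small sketch of that wrapper relationship, using nothing beyond the standard library: the conversion in either direction is a wrap/unwrap, not a reparse of the digits.

    val j = new java.math.BigDecimal("99.95")
    val s: BigDecimal = BigDecimal(j)               // wrap the Java value
    val back: java.math.BigDecimal = s.bigDecimal   // unwrap it again
    println(back == j)                              // true: same value and scale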

Re: Spark SQL Dataset and BigDecimal

2021-02-17 Thread Takeshi Yamamuro
Yea, I think that's because it's needed for interoperability between Scala and Java. If it returned a Scala decimal, Java code could not handle it. If you want a Scala decimal, you need to convert it yourself. Bests, Takeshi On Wed, Feb 17, 2021 at 9:48 PM Ivan Petrov wrote: > Hi, I'm using the Spark Scala Dataset API to write Spark SQL jobs.
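
A minimal sketch of "convert it yourself", assuming a local SparkSession; the app name and the column name "amount" are made up for the example:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("decimal-demo").getOrCreate()
    import spark.implicits._

    // Write a Scala BigDecimal, read the row back, and rewrap the Java value.
    val df = Seq(BigDecimal("12.34")).toDF("amount")
    val row = df.first()
    val javaBD: java.math.BigDecimal = row.getDecimal(row.fieldIndex("amount"))
    val scalaBD: BigDecimal = BigDecimal(javaBD)    // wrap back into the Scala type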

Spark SQL Dataset and BigDecimal

2021-02-17 Thread Ivan Petrov
Hi, I'm using the Spark Scala Dataset API to write Spark SQL jobs. I've noticed that a Spark Dataset accepts scala.math.BigDecimal as the value, but it always returns java.math.BigDecimal when you read it back. Is that by design? Should I use java.math.BigDecimal everywhere instead? Is there any performance penalty for the conversion?
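
For reference, a rough repro of the behavior described, assuming a local SparkSession and an illustrative column name:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("repro").getOrCreate()
    import spark.implicits._

    val df = Seq(BigDecimal("1.23")).toDF("value")  // scala.math.BigDecimal goes in
    val out = df.first().get(0)
    println(out.getClass.getName)                   // java.math.BigDecimal comes out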