beliefer commented on code in PR #37915:
URL: https://github.com/apache/spark/pull/37915#discussion_r977133079
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala:
##########
@@ -28,52 +28,35 @@ import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.unsafe.types.UTF8String
/**
- * A mutable implementation of BigDecimal that can hold a Long if values are small enough.
- *
- * The semantics of the fields are as follows:
- * - _precision and _scale represent the SQL precision and scale we are looking for
- * - If decimalVal is set, it represents the whole decimal value
- * - Otherwise, the decimal value is longVal / (10 ** _scale)
- *
- * Note, for values between -1.0 and 1.0, precision digits are only counted after dot.
+ * A mutable implementation of BigDecimal that holds a `DecimalOperation`.
*/
@Unstable
-final class Decimal extends Ordered[Decimal] with Serializable {
+final class Decimal(initEnabled: Boolean = true) extends Ordered[Decimal] with Serializable {
import org.apache.spark.sql.types.Decimal._
- private var decimalVal: BigDecimal = null
- private var longVal: Long = 0L
- private var _precision: Int = 1
- private var _scale: Int = 0
+ private var decimalOperation: DecimalOperation[_] = null
Review Comment:
By default, `decimalOperation` is not null. To reduce the overhead of operations (e.g. `+`), I let it be null here. I tested with the benchmark: if we give `decimalOperation` a default `DecimalOperation` value, these math operations (e.g. `+`) have a 5x performance overhead.
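Below is a minimal, self-contained sketch (not the actual code from this PR) of the design point above: keeping the heavier operation object null by default so that construction and simple arithmetic stay on an allocation-free Long fast path, and only allocating it when a value does not fit the compact representation. The names `SimpleDecimal` and `HeavyOp` are hypothetical stand-ins for `Decimal` and `DecimalOperation`.

```scala
// Hypothetical sketch of the "keep the field null by default" trade-off.
final class SimpleDecimal extends Serializable {
  // Fast-path state: the value is longVal * 10^(-scale), stored in a Long.
  private var longVal: Long = 0L
  private var scale: Int = 0

  // Slow-path state: allocated lazily, null by default so that construction
  // and simple arithmetic stay allocation-free.
  private var op: HeavyOp = null

  def set(unscaled: Long, newScale: Int): SimpleDecimal = {
    this.longVal = unscaled
    this.scale = newScale
    this.op = null
    this
  }

  def set(value: BigDecimal): SimpleDecimal = {
    // Only here do we pay for the heavier representation.
    this.op = new HeavyOp(value)
    this
  }

  def + (that: SimpleDecimal): SimpleDecimal = {
    val result = new SimpleDecimal
    if (this.op == null && that.op == null && this.scale == that.scale) {
      // Hot path: pure Long arithmetic, no extra object allocation
      // (overflow handling omitted for brevity).
      result.set(this.longVal + that.longVal, this.scale)
    } else {
      // Fallback: promote both sides and delegate to the heavier representation.
      result.set(this.toBigDecimal + that.toBigDecimal)
    }
  }

  def toBigDecimal: BigDecimal =
    if (op != null) op.value
    else BigDecimal(longVal) / BigDecimal(10).pow(scale)

  override def toString: String = toBigDecimal.toString
}

// Hypothetical stand-in for the `DecimalOperation` abstraction from this PR.
final class HeavyOp(val value: BigDecimal) extends Serializable

object SimpleDecimal {
  def main(args: Array[String]): Unit = {
    val a = new SimpleDecimal().set(12345L, 2) // 123.45
    val b = new SimpleDecimal().set(678L, 2)   // 6.78
    println(a + b)                             // prints 130.23 via the Long fast path
  }
}
```

In this sketch, adding two small values never touches `HeavyOp`, which is the kind of hot path the benchmark above measures; eagerly initializing `op` in the constructor would put an allocation on every such operation.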