attilapiros commented on a change in pull request #26016: [SPARK-24914][SQL] New statistic to improve data size estimate for columnar storage formats
URL: https://github.com/apache/spark/pull/26016#discussion_r388358554
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
 ##########
 @@ -439,14 +446,18 @@ case class CatalogStatistics(
     } else {
      // When plan statistics are disabled or the table doesn't have other statistics,
      // we apply the size-only estimation strategy and only propagate sizeInBytes in statistics.
-      Statistics(sizeInBytes = sizeInBytes)
+      val size = deserFactor.map { factor =>
+        BigInt(roundToBigInteger(sizeInBytes.doubleValue * deserFactorDistortion * factor, UP))
 
 Review comment:
   There is an alternative method, but for very large numbers it is not as precise:
   ```
   BigDecimal(Math.ceil(sizeInBytes.doubleValue * deserFactorDistortion * factor)).toBigInt()
   ```
   
   Although I doubt that with numbers this large it would really make a difference, e.g.:
   ```
   scala> val n = 16585485383180055897D
   n: Double = 1.6585485383180057E19
   
   scala> val withCeil = BigDecimal(Math.ceil(n)).toBigInt()
   withCeil: scala.math.BigInt = 16585485383180057000
   
   scala> val withGuava = BigInt(roundToBigInteger(n, java.math.RoundingMode.UP))
   withGuava: scala.math.BigInt = 16585485383180056576
   ```
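   For completeness, here is a small standalone sketch (not part of the patch, and assuming Guava's `com.google.common.math.DoubleMath.roundToBigInteger` is the statically imported method used in the diff) showing where the discrepancy comes from: the literal cannot be represented exactly as a `Double`, and the two conversions back to an integer differ only in whether they go through the decimal string form or the exact binary value.
   ```
   import java.math.RoundingMode
   import com.google.common.math.DoubleMath

   object RoundingComparison {
     def main(args: Array[String]): Unit = {
       // The decimal literal is not exactly representable in binary64;
       // the nearest Double is 16585485383180056576.
       val n: Double = 16585485383180055897D

       // BigDecimal(Double) goes through Double.toString (the shortest decimal
       // that round-trips), so this yields 16585485383180057000.
       val withCeil = BigDecimal(Math.ceil(n)).toBigInt

       // Guava converts the Double's exact binary value, yielding 16585485383180056576.
       val withGuava = BigInt(DoubleMath.roundToBigInteger(n, RoundingMode.UP))

       println(s"withCeil  = $withCeil")
       println(s"withGuava = $withGuava")
     }
   }
   ```
   Either way, the relative error (on the order of 1e-16) was already introduced when the literal was parsed into a `Double`, so for a byte-size estimate the choice is mostly about readability versus the extra Guava call.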
    
