GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15959#discussion_r89257721
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala ---
    @@ -58,60 +61,170 @@ case class Statistics(
       }
     }
     
    +
     /**
    - * Statistics for a column.
    + * Statistics collected for a column.
    + *
    + * 1. Supported data types are defined in `ColumnStat.supportsType`.
    + * 2. The JVM data type stored in min/max is the external data type (used in Row) for the
    + * corresponding Catalyst data type. For example, for DateType we store java.sql.Date, and for
    + * TimestampType we store java.sql.Timestamp.
    + * 3. Integral types are all upcast to longs, i.e. shorts are stored as longs.
    + * 4. There is no guarantee that the statistics collected are accurate. Approximation algorithms
    + *    (sketches) might have been used, and the data collected can also be stale.
    + *
    + * @param distinctCount number of distinct values
    + * @param min minimum value
    + * @param max maximum value
    + * @param nullCount number of nulls
    + * @param avgLen average length of the values. For fixed-length types, this should be a constant.
    + * @param maxLen maximum length of the values. For fixed-length types, this should be a constant.
      */
    -case class ColumnStat(statRow: InternalRow) {
    +case class ColumnStat(
    +    distinctCount: BigInt,
    +    min: Option[Any],
    +    max: Option[Any],
    +    nullCount: BigInt,
    +    avgLen: Long,
    +    maxLen: Long) {
     
    -  def forNumeric[T <: AtomicType](dataType: T): NumericColumnStat[T] = {
    -    NumericColumnStat(statRow, dataType)
    -  }
    -  def forString: StringColumnStat = StringColumnStat(statRow)
    -  def forBinary: BinaryColumnStat = BinaryColumnStat(statRow)
    -  def forBoolean: BooleanColumnStat = BooleanColumnStat(statRow)
    +  /**
    +   * Returns a map from string to string that can be used to serialize the column stats.
    +   * The key is the name of the field (e.g. "ndv" or "min"), and the value is the string
    +   * representation of the value. The deserialization side is defined in [[ColumnStat.fromMap]].
    +   *
    +   * As part of the protocol, the returned map always contains a key called "version".
    +   * If the min/max values are null (None), they are stored as "<null>".
    --- End diff --
    
    I may have missed the discussion, but why not just remove the `max` and `min` entries from the map when they are null?
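
    A minimal sketch of that alternative (hypothetical names, not this PR's implementation; a real `fromMap` would also parse `min`/`max` according to the column's data type):

    ```scala
    // Hypothetical sketch: omit the min/max keys entirely when the values are
    // None, instead of writing a "<null>" sentinel.
    case class ColumnStatSketch(
        distinctCount: BigInt,
        min: Option[Any],
        max: Option[Any],
        nullCount: BigInt,
        avgLen: Long,
        maxLen: Long) {

      def toMap: Map[String, String] = {
        val base = Map(
          "version" -> "1",
          "distinctCount" -> distinctCount.toString,
          "nullCount" -> nullCount.toString,
          "avgLen" -> avgLen.toString,
          "maxLen" -> maxLen.toString)
        // A None value contributes no entry, so no sentinel string is needed.
        base ++ min.map(v => "min" -> v.toString) ++ max.map(v => "max" -> v.toString)
      }
    }

    object ColumnStatSketch {
      def fromMap(map: Map[String, String]): ColumnStatSketch = ColumnStatSketch(
        distinctCount = BigInt(map("distinctCount")),
        // Map.get already returns Option, so an absent key round-trips to None.
        min = map.get("min"),
        max = map.get("max"),
        nullCount = BigInt(map("nullCount")),
        avgLen = map("avgLen").toLong,
        maxLen = map("maxLen").toLong)
    }
    ```

    One trade-off: with omission, an absent key has to mean None, whereas with a sentinel an absent key can be rejected as malformed input.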

