Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15959#discussion_r89035720
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
---
@@ -58,60 +61,127 @@ case class Statistics(
}
}
+
/**
- * Statistics for a column.
+ * Statistics collected for a column.
+ *
+ * 1. Supported data types are defined in `ColumnStat.supportsType`.
+ * 2. The JVM data type stored in min/max is the external data type (used in Row) for the
+ * corresponding Catalyst data type. For example, for DateType we store java.sql.Date, and for
+ * TimestampType we store java.sql.Timestamp.
+ * 3. For integral types, they are all upcasted to longs, i.e. shorts are stored as longs.
+ *
+ * @param ndv number of distinct values
+ * @param min minimum value
+ * @param max maximum value
+ * @param numNulls number of nulls
+ * @param avgLen average length of the values. For fixed-length types, this should be a constant.
+ * @param maxLen maximum length of the values. For fixed-length types, this should be a constant.
*/
-case class ColumnStat(statRow: InternalRow) {
+// TODO: decide if we want to use bigint to represent ndv and numNulls.
+case class ColumnStat(
+ ndv: Long,
+ min: Any,
+ max: Any,
+ numNulls: Long,
+ avgLen: Long,
+ maxLen: Long) {
- def forNumeric[T <: AtomicType](dataType: T): NumericColumnStat[T] = {
- NumericColumnStat(statRow, dataType)
- }
- def forString: StringColumnStat = StringColumnStat(statRow)
- def forBinary: BinaryColumnStat = BinaryColumnStat(statRow)
- def forBoolean: BooleanColumnStat = BooleanColumnStat(statRow)
+ /**
+ * Returns a map from string to string that can be used to serialize the column stats.
+ * The key is the name of the field (e.g. "ndv" or "min"), and the value is the string
+ * representation for the value. The deserialization side is defined in [[ColumnStat.fromMap]].
+ *
+ * As part of the protocol, the returned map always contains a key called "version".
+ */
+ def toMap: Map[String, String] = Map(
--- End diff --
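The `toMap` doc comment above describes a string-map serialization protocol with a mandatory "version" key and a `fromMap` counterpart. A minimal sketch of that round trip, using a simplified, hypothetical stats record (the class and field subset here are illustrative, not the PR's actual code):

```scala
// Illustrative sketch of the toMap/fromMap protocol: keys are field names,
// values are string representations, and every map carries a "version" key.
case class SimpleColumnStat(ndv: Long, numNulls: Long, avgLen: Long, maxLen: Long) {
  def toMap: Map[String, String] = Map(
    "version" -> "1",
    "ndv" -> ndv.toString,
    "numNulls" -> numNulls.toString,
    "avgLen" -> avgLen.toString,
    "maxLen" -> maxLen.toString)
}

object SimpleColumnStat {
  // Deserialization side: returns None for a missing key or unparsable value.
  def fromMap(map: Map[String, String]): Option[SimpleColumnStat] =
    try {
      Some(SimpleColumnStat(
        ndv = map("ndv").toLong,
        numNulls = map("numNulls").toLong,
        avgLen = map("avgLen").toLong,
        maxLen = map("maxLen").toLong))
    } catch {
      case _: NoSuchElementException | _: NumberFormatException => None
    }
}
```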
Should we also have a flag to indicate whether a column stat is valid or not?
A few example cases where it would help:
- Some bug in the code that led to all stats generated with version XYZ
being incorrect. We want clients not to trust the stats in such a case.
- If for some reason we read a bad stat from the metastore (e.g. min > max).
- Storing `min` and `max` for string types can be risky because you are at the
mercy of user data. In the past this has bitten me when a column value was a super
large string. The step where stats are generated needs to guard against such
cases and set the "stat-is-invalid" flag.
All of these can be handled by returning some special value (`None`) to the client,
but then you lose the information that some stats were there.
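A rough sketch of what such a flag could look like, with hypothetical names (`CheckedStat`, `checked`) that are not part of the PR, just to illustrate the idea of marking inconsistent stats invalid instead of dropping them:

```scala
// Hypothetical validity flag: stats detected as inconsistent at generation
// or read time are kept but marked invalid, so clients can still see that
// stats existed while knowing not to trust them.
case class CheckedStat(min: Long, max: Long, numNulls: Long, isValid: Boolean)

object CheckedStat {
  // Guard applied when stats are generated or read back from the metastore:
  // e.g. min > max or a negative null count marks the stat invalid.
  def checked(min: Long, max: Long, numNulls: Long): CheckedStat =
    CheckedStat(min, max, numNulls, isValid = min <= max && numNulls >= 0L)
}
```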