Github user cloud-fan commented on the issue: https://github.com/apache/spark/pull/15090

In this PR, we use `ColumnStats` to represent the column statistics in memory, and persist it to the Hive metastore by converting it to a string with the format `a=1,b=2`. This brings two problems:

1. `ColumnStats` always contains all the fields, which means an int column also has the `numTrues` property, although it's `None`.
2. It's inefficient to convert `ColumnStats` to a string in the format `a=1,b=2` and parse it back.

My suggestion: we can just use `InternalRow` to represent the column statistics in memory. The schema of this `InternalRow` is defined as a contract, e.g. for an int column, the schema is `<max: int, min: int, ndv: long, ...>`. When persisting it to the Hive metastore, we first serialize the row to binary (if we use `UnsafeRow`, the serialization is free), and then convert the binary to a hex string (or use another encoding). This solves both problems. What do you think?
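To illustrate the binary-to-hex persistence step without pulling in Spark internals, here is a minimal, hypothetical sketch of the round trip: the byte array stands in for the serialized `UnsafeRow`, and the hex string is what would be stored in the metastore table properties. The object and method names are illustrative, not part of any Spark API.

```scala
// Hypothetical sketch: hex-encode serialized column stats for metastore storage.
object ColumnStatsHexDemo {
  // Encode raw bytes (e.g. a serialized UnsafeRow) as a lowercase hex string.
  def toHex(bytes: Array[Byte]): String =
    bytes.map(b => f"${b & 0xff}%02x").mkString

  // Decode the hex string back into the original bytes.
  def fromHex(hex: String): Array[Byte] =
    hex.grouped(2).map(s => Integer.parseInt(s, 16).toByte).toArray

  def main(args: Array[String]): Unit = {
    // Pretend these bytes pack, say, max and ndv for an int column.
    val stats = Array[Byte](0, 0, 0, 42, 0, 0, 0, 7)
    val hex = toHex(stats)                       // stored as a table property
    val roundTrip = fromHex(hex)                 // read back at plan time
    println(hex)
    println(java.util.Arrays.equals(stats, roundTrip))
  }
}
```

Because `UnsafeRow` is already a contiguous byte buffer, the serialization itself would be a no-op; only the hex (or base64) encoding adds cost, and it is a single linear pass in each direction.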