Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14971#discussion_r79121955

    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala ---
    @@ -378,6 +380,47 @@ private[hive] class HiveClientImpl(
         val properties = Option(h.getParameters).map(_.asScala.toMap).orNull

    +    val excludedTableProperties = Set(
    +      // The following are Hive-generated statistics fields. Currently, only total_size and
    +      // row_count are used to populate the dedicated field `stats`.
    +      // TODO: stats should include all the other three fields.
    +      StatsSetupConst.COLUMN_STATS_ACCURATE,
    +      StatsSetupConst.NUM_FILES,
    +      StatsSetupConst.NUM_PARTITIONS,
    +      StatsSetupConst.ROW_COUNT,
    +      StatsSetupConst.RAW_DATA_SIZE,
    --- End diff --

    How about we only handle the two we need, i.e. `TOTAL_SIZE` and `RAW_DATA_SIZE`? Then we don't need an extra `getTable` call in `alterTable`, which may cause a performance regression. Ideally the rule is: we only drop the Hive properties that we have moved to other places, so that we can reconstruct them without an extra `getTable` call.
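The narrower exclusion the reviewer suggests can be sketched as below. This is an illustrative snippet, not the actual `HiveClientImpl` code: the string keys `"totalSize"` and `"rawDataSize"` stand in for `StatsSetupConst.TOTAL_SIZE` and `StatsSetupConst.RAW_DATA_SIZE` (their values in Hive), and the sample map is made up. The idea is to strip only the properties that get re-materialized into the dedicated `stats` field, so `alterTable` can reconstruct them without a `getTable` round trip.

```scala
// Hypothetical sketch: drop only the Hive table properties that were
// moved into the dedicated `stats` field, keeping everything else.
object ExcludedPropsSketch {
  // Assumed values of StatsSetupConst.TOTAL_SIZE / RAW_DATA_SIZE.
  val movedToStats: Set[String] = Set("totalSize", "rawDataSize")

  def stripMovedProps(hiveProps: Map[String, String]): Map[String, String] =
    hiveProps.filter { case (k, _) => !movedToStats.contains(k) }
}

// Example input (made up): Hive-generated properties plus a user comment.
val hiveProps = Map(
  "totalSize"   -> "1024", // dropped: moved into `stats`
  "rawDataSize" -> "512",  // dropped: moved into `stats`
  "numFiles"    -> "3",    // kept: not moved, so no getTable needed to restore it
  "comment"     -> "demo") // kept: ordinary user property

val kept = ExcludedPropsSketch.stripMovedProps(hiveProps)
// kept contains only "numFiles" and "comment"
```

With this rule, `alterTable` can rebuild the two dropped properties directly from the in-memory `stats` field instead of re-reading the table from the metastore.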