Github user wzhfy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14712#discussion_r75569351
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala ---
    @@ -33,7 +34,7 @@ import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, CatalogTable}
      * Right now, it only supports Hive tables and it only updates the size of a Hive table
      * in the Hive metastore.
      */
    -case class AnalyzeTableCommand(tableName: String) extends RunnableCommand {
    +case class AnalyzeTableCommand(tableName: String, noscan: Boolean = true) extends RunnableCommand {
    --- End diff --
    
    Recalculation incurs a high cost, so it should be triggered explicitly by users such as DBAs. We can add a mechanism to incrementally update the stats in the future, but that will need some well-designed algorithms (especially for histograms) and a definition of confidence intervals.
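    
    For illustration, a minimal sketch of how a user would trigger each mode through Spark SQL, which is what the `noscan` flag above corresponds to (the table name `sales` is hypothetical):
    
        // NOSCAN collects size-only statistics without reading the data -- cheap.
        spark.sql("ANALYZE TABLE sales COMPUTE STATISTICS NOSCAN")
        // Without NOSCAN, the table is scanned to also compute the row count -- costly,
        // which is why this should stay an explicit, user-triggered operation.
        spark.sql("ANALYZE TABLE sales COMPUTE STATISTICS")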

