GitHub user zzl1787 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19129#discussion_r153467044
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1587,6 +1580,10 @@ options.
           Note that this is different from the Hive behavior.
     - As a result, `DROP TABLE` statements on those tables will not remove the data.
     
    + - From Spark 2.0.1, `spark.sql.parquet.cacheMetadata` is no longer used. See
    +   [SPARK-16321](https://issues.apache.org/jira/browse/SPARK-16321) and
    +   [SPARK-15639](https://issues.apache.org/jira/browse/SPARK-15639) for details.
    --- End diff --
    
Hi, I'm new to Spark. I wonder how to disable metadata caching now that this conf has been removed. I created an external table whose Parquet files in the specified location are updated daily, so I would like to disable metadata caching rather than running `REFRESH TABLE xxx` after every update.
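
To illustrate, here is a minimal sketch of the explicit refresh step I am trying to avoid, assuming Spark 2.x with Hive support and a hypothetical external table named `my_table`:

```scala
import org.apache.spark.sql.SparkSession

object RefreshParquetTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RefreshParquetTable")
      .enableHiveSupport()
      .getOrCreate()

    // Invalidate the cached file listing and metadata for the table so the
    // next query re-scans the location and sees newly written Parquet files.
    // "my_table" is a hypothetical table name used for illustration.
    spark.catalog.refreshTable("my_table")

    // Equivalent SQL form:
    spark.sql("REFRESH TABLE my_table")

    spark.table("my_table").show()
    spark.stop()
  }
}
```

Today I have to run this after every daily file drop; what I am asking is whether there is still a configuration that makes such an explicit refresh unnecessary.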

