[ https://issues.apache.org/jira/browse/SPARK-15678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877575#comment-15877575 ]
Kazuaki Ishizaki edited comment on SPARK-15678 at 2/22/17 6:37 AM:
-------------------------------------------------------------------

How about inserting {{spark.catalog.refreshByPath()}} as follows?

{code}
spark.range(1000).write.mode("overwrite").parquet(dir)
spark.catalog.refreshByPath(dir) // insert a NEW statement
val df1 = spark.read.parquet(dir)
df1.count
f(df1).count
{code}

> Not use cache on appends and overwrites
> ---------------------------------------
>
>                 Key: SPARK-15678
>                 URL: https://issues.apache.org/jira/browse/SPARK-15678
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Sameer Agarwal
>            Assignee: Sameer Agarwal
>             Fix For: 2.0.0
>
>
> SparkSQL currently doesn't drop caches if the underlying data is overwritten.
>
> {code}
> val dir = "/tmp/test"
> sqlContext.range(1000).write.mode("overwrite").parquet(dir)
> val df = sqlContext.read.parquet(dir).cache()
> df.count() // outputs 1000
> sqlContext.range(10).write.mode("overwrite").parquet(dir)
> sqlContext.read.parquet(dir).count() // outputs 1000 instead of 10 <---- We are still using the cached dataset
> {code}
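For reference, a minimal end-to-end sketch of the workaround (assuming a local {{SparkSession}} and a scratch path, both illustrative only):

{code}
import org.apache.spark.sql.SparkSession

// Assumed setup: a local session and a temp directory; adjust for your environment.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("spark-15678-refreshByPath")
  .getOrCreate()
val dir = "/tmp/spark-15678-demo"

// Reproduce the report: cache a DataFrame over the path, then overwrite the path.
spark.range(1000).write.mode("overwrite").parquet(dir)
val df = spark.read.parquet(dir).cache()
df.count() // 1000

spark.range(10).write.mode("overwrite").parquet(dir)

// Invalidate cached data that depends on the path before re-reading it.
spark.catalog.refreshByPath(dir)
spark.read.parquet(dir).count() // 10 after the refresh
{code}

Note that {{refreshByPath}} invalidates and refreshes the cached data (and associated metadata) for any Dataset that contains the given data source path, so the next action on a reader of that path recomputes from the files on disk.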