aokolnychyi commented on a change in pull request #1801:
URL: https://github.com/apache/iceberg/pull/1801#discussion_r529031578



##########
File path: 
spark3/src/main/java/org/apache/iceberg/spark/procedures/BaseProcedure.java
##########
@@ -56,7 +65,9 @@ protected BaseProcedure(TableCatalog tableCatalog) {
 
     T result = func.apply(icebergTable);
 
-    refreshSparkCache(ident, sparkTable);
+    if (refreshSparkCache) {
+      refreshSparkCache(ident, sparkTable);
+    }

Review comment:
       This raises a good point. I did not consider external modifications; 
my motivation was to keep data cached when a procedure does not modify the 
underlying data, since (re)caching is usually expensive.
   
   Do we think external modifications should invalidate Spark's cache? It seems 
like a good idea, but there is one concern: it does not match how things work 
right now. If I understand correctly, Spark's cache knows nothing about 
external modifications and stays valid even if someone modifies the underlying 
table in Presto or another Spark job. So the only case in which the cache is 
invalidated is when the operation happens within the same session.
   
   I don't feel strongly about this, but I want us to think it through here.
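
   To make the pattern in the diff concrete, here is a minimal, self-contained 
sketch of gating cache invalidation behind a flag. All names below (`ToyCacheManager`, 
`withTable`, the table identifier) are hypothetical stand-ins, not the actual 
Spark/Iceberg API; the point is only that read-only procedures skip the refresh 
while data-modifying ones trigger it.

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.function.Function;

   public class ToyCacheManager {
     // a tiny stand-in for Spark's cached-table registry
     static final Map<String, String> cache = new HashMap<>();

     // mirrors BaseProcedure: run func against the table, then
     // refresh (here: invalidate) the cache only when asked to
     static <T> T withTable(String ident, Function<String, T> func, boolean refreshCache) {
       T result = func.apply(ident);
       if (refreshCache) {
         cache.remove(ident); // invalidate only when the procedure changed data
       }
       return result;
     }

     public static void main(String[] args) {
       cache.put("db.tbl", "cached plan");

       // a read-only procedure keeps the cache warm
       withTable("db.tbl", t -> "inspected " + t, false);
       System.out.println(cache.containsKey("db.tbl"));

       // a data-modifying procedure invalidates it
       withTable("db.tbl", t -> "rewrote files of " + t, true);
       System.out.println(cache.containsKey("db.tbl"));
     }
   }
   ```

   As the sketch shows, the flag only controls session-local invalidation; it 
does nothing about writers outside the session, which is exactly the gap 
discussed above.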




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
