codope commented on code in PR #6159:
URL: https://github.com/apache/hudi/pull/6159#discussion_r927016415
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -638,9 +638,37 @@ object HoodieSparkSqlWriter {
SyncUtilHelpers.runHoodieMetaSync(impl.trim, properties, fs.getConf,
fs, basePath.toString, baseFileFormat)
})
}
+
+      // Since Hive tables are now synced as Spark data source tables which are cached after Spark SQL queries
+      // we must invalidate this table in the cache so writes are reflected in later queries
+      if (metaSyncEnabled) {
+        getHiveTableNames(hoodieConfig).foreach(name => {
+          val qualifiedTableName = String.join(".", hoodieConfig.getStringOrDefault(HIVE_DATABASE), name)
+          if (spark.catalog.tableExists(qualifiedTableName)) {
+            spark.catalog.refreshTable(qualifiedTableName)
+          }
Review Comment:
We can move this inside the if-block above, since it is guarded by essentially the same `metaSyncEnabled` condition.
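
For illustration, the suggested refactor would look roughly like the sketch below. The names `metaSyncEnabled`, `getHiveTableNames`, `hoodieConfig`, `HIVE_DATABASE`, and `spark` come from the diff above; the meta-sync loop shown first is abbreviated from the hunk's context lines, and `syncClientToolClasses` is a hypothetical placeholder for however the sync implementations are obtained in the surrounding code.

```scala
if (metaSyncEnabled) {
  // Existing sync loop from the context lines of the hunk
  // (syncClientToolClasses is a placeholder for the real collection).
  syncClientToolClasses.foreach(impl => {
    SyncUtilHelpers.runHoodieMetaSync(impl.trim, properties, fs.getConf,
      fs, basePath.toString, baseFileFormat)
  })

  // Cache invalidation folded into the same guard, so the second
  // `if (metaSyncEnabled)` check from the diff is no longer needed.
  getHiveTableNames(hoodieConfig).foreach(name => {
    val qualifiedTableName = String.join(".",
      hoodieConfig.getStringOrDefault(HIVE_DATABASE), name)
    if (spark.catalog.tableExists(qualifiedTableName)) {
      spark.catalog.refreshTable(qualifiedTableName)
    }
  })
}
```

This keeps a single point of control for "meta sync is on", so the refresh can never drift out of sync with the condition that makes it necessary.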