aokolnychyi commented on code in PR #52764:
URL: https://github.com/apache/spark/pull/52764#discussion_r2473840122
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala:
##########
@@ -365,6 +368,44 @@ class CacheManager extends Logging with AdaptiveSparkPlanHelper {
}
}
+ private[sql] def lookupCachedTable(
+ name: Seq[String],
+ timeTravelSpec: Option[TimeTravelSpec],
+ conf: SQLConf): Option[LogicalPlan] = {
+ val cachedRelations = findCachedRelations(name, timeTravelSpec, conf)
+ cachedRelations match {
+ case cachedRelation +: _ =>
+ val nameWithTimeTravel = timeTravelSpec match {
+ case Some(spec) => s"${name.quoted} $spec"
+ case None => name.quoted
+ }
+ CacheManager.logCacheOperation(
log"Relation cache hit for table ${MDC(TABLE_NAME, nameWithTimeTravel)}")
+ Some(cachedRelation)
Review Comment:
Well, we shouldn't have multiple matching relations after this change, but
`cachedData` is an `IndexedSeq` to which we always prepend entries (so newer
entries are at the beginning of the sequence). We can't tell which version of
the table is newer because the versions are opaque strings; in Iceberg, for
instance, they are random UUIDs. That said, this piece should always take the
newest matching entry, although we expect there to be only one.
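A minimal sketch of the ordering argument above (the names `CachedData`, `prepend`, and `lookupNewest` are illustrative placeholders, not the actual Spark internals): because entries are prepended, taking the first match yields the newest matching entry even when version strings carry no ordering.

```scala
// Hypothetical model of the cache: entries are prepended to an IndexedSeq,
// so the newest entry always sits at the head of the sequence.
case class CachedData(tableName: String, version: String)

object CacheOrderingSketch {
  // Newer entries go to the front, mirroring how `cachedData` is maintained.
  def prepend(cache: IndexedSeq[CachedData], entry: CachedData): IndexedSeq[CachedData] =
    entry +: cache

  // Taking the first match therefore returns the newest matching entry,
  // even when versions (e.g. Iceberg snapshot UUIDs) are not comparable.
  def lookupNewest(cache: IndexedSeq[CachedData], name: String): Option[CachedData] =
    cache.collectFirst { case cd if cd.tableName == name => cd }
}
```

With two entries for the same table, the one prepended last wins, regardless of how the version strings compare lexicographically.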
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]