1996fanrui commented on code in PR #1910:
URL: https://github.com/apache/incubator-streampark/pull/1910#discussion_r1005806252
##########
streampark-flink/streampark-flink-kubernetes/src/main/scala/org/apache/streampark/flink/kubernetes/FlinkTrackController.scala:
##########
@@ -236,3 +240,19 @@ object MetricCache {
def build(): MetricCache = new MetricCache()
}
+
+class ArchivesCache {
+ def put(k: String, v: String): Unit = cache.put(k, v)
+
+ def get(k: String): String = cache.getIfPresent(k)
+
+ def asMap(): Map[String, String] = cache.asMap().toMap
+
+ def cleanUp(): Unit = cache.cleanUp()
+
+ val cache: Cache[String, String] = Caffeine.newBuilder.build()
Review Comment:
Neither the cache size nor the expiration policy is set, so why use a cache here?
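To illustrate the reviewer's point: without a size bound or expiry policy, the Caffeine cache behaves like an ordinary map and can grow without limit. A minimal sketch of a TTL-based alternative is below (the class name `TtlArchivesCache` and the injected clock are hypothetical, added for testability); in the PR itself the natural fix would be Caffeine's own `maximumSize`/`expireAfterWrite` builder options.

```scala
import java.util.concurrent.ConcurrentHashMap

// Sketch of an archives cache whose entries expire after ttlMillis.
// The clock is injected so expiry can be tested deterministically.
class TtlArchivesCache(ttlMillis: Long,
                       now: () => Long = () => System.currentTimeMillis()) {
  private val cache = new ConcurrentHashMap[String, (String, Long)]()

  def put(k: String, v: String): Unit = cache.put(k, (v, now()))

  def get(k: String): Option[String] = Option(cache.get(k)) match {
    case Some((v, t)) if now() - t < ttlMillis => Some(v)
    case Some(_) => cache.remove(k); None // lazily evict expired entries
    case None    => None
  }
}
```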
##########
streampark-flink/streampark-flink-kubernetes/src/main/scala/org/apache/streampark/flink/kubernetes/watcher/FlinkJobStatusWatcher.scala:
##########
@@ -276,7 +277,9 @@ class FlinkJobStatusWatcher(conf: JobStatusWatcherConfig = JobStatusWatcherConfi
 }
} else if (isConnection) {
logger.info("The deployment is deleted and enters the task failure process.")
- FlinkJobState.FAILED
+ val jobId = trackId.jobId
+ val archivePath = trackController.flinkArchives.get(jobId)
+ FlinkJobState.of(FetchArchives.fetchArchives(trackId.jobId, archivePath))
Review Comment:
1. Should we clear the jobId from the `flinkArchives` after getting the jobState?
2. If the StreamPark backend service is restarted, how do we get the archivePath?
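On the reviewer's second question: if the jobId-to-archivePath mapping lives only in an in-memory cache, a backend restart loses it. A minimal sketch of one option is below (the class name `PersistentArchiveIndex` and the file-based store are hypothetical, chosen for a self-contained example); in StreamPark a database table would be the more likely home for this mapping.

```scala
import java.nio.file.{Files, Path}
import java.util.Properties

// Persists jobId -> archivePath to a properties file and reloads it on
// construction, so the mapping survives a service restart.
class PersistentArchiveIndex(store: Path) {
  private val props = new Properties()
  if (Files.exists(store)) {
    val in = Files.newInputStream(store)
    try props.load(in) finally in.close()
  }

  def put(jobId: String, archivePath: String): Unit = {
    props.setProperty(jobId, archivePath)
    val out = Files.newOutputStream(store)
    try props.store(out, "jobId -> archivePath") finally out.close()
  }

  def get(jobId: String): Option[String] = Option(props.getProperty(jobId))
}
```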
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]