gengliangwang commented on a change in pull request #34092:
URL: https://github.com/apache/spark/pull/34092#discussion_r715321250



##########
File path: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala
##########
@@ -1253,44 +1254,46 @@ private[spark] class AppStatusListener(
     toDelete.foreach { j => kvstore.delete(j.getClass(), j.info.jobId) }
   }
 
+  private case class StageCompletionTime(
+      stageId: Int,
+      attemptId: Int,
+      completionTime: Long)
+
   private def cleanupStages(count: Long): Unit = {
    val countToDelete = calculateNumberToRemove(count, conf.get(MAX_RETAINED_STAGES))
     if (countToDelete <= 0L) {
       return
     }
 
+    val stageArray = new ArrayBuffer[StageCompletionTime]()
+    val stageDataCount = new mutable.HashMap[Int, Int]()
+    kvstore.view(classOf[StageDataWrapper]).forEach { s =>
+      // Here we keep track of the total number of StageDataWrapper entries for each stage id.
+      // This will be used in cleaning up the RDDOperationGraphWrapper data.
+      if (stageDataCount.contains(s.info.stageId)) {
+        stageDataCount(s.info.stageId) += 1
+      } else {
+        stageDataCount(s.info.stageId) = 1
+      }
+      if (s.info.status != v1.StageStatus.ACTIVE && s.info.status != v1.StageStatus.PENDING) {
+        val candidate =
+          StageCompletionTime(s.info.stageId, s.info.attemptId, s.completionTime)
+        stageArray.append(candidate)
+      }
+    }
+
     // As the completion time of a skipped stage is always -1, we will remove skipped stages first.
     // This is safe since the job itself contains enough information to render skipped stages in the
     // UI.
-    val view = kvstore.view(classOf[StageDataWrapper]).index("completionTime")
-    val stages = KVUtils.viewToSeq(view, countToDelete.toInt) { s =>
-      s.info.status != v1.StageStatus.ACTIVE && s.info.status != v1.StageStatus.PENDING
-    }
-
-    val stageIds = stages.map { s =>

Review comment:
   I thought about keeping the original code path for LevelDB here, but after investigation I found that the default number of retained stages is 1000, so as per
   ```
  private def calculateNumberToRemove(dataSize: Long, retainedSize: Long): Long = {
       if (dataSize > retainedSize) {
         math.max(retainedSize / 10L, dataSize - retainedSize)
       } else {
         0L
       }
     }
   ```
   The `stages` sequence here normally has a length of 100. Looking up each stage id in LevelDB 100 times is not efficient compared to the new code, so I decided to keep it simple and use the same code for both InMemoryStore and LevelDB.
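
   To illustrate the arithmetic behind "normally has a length of 100", here is a minimal standalone sketch (not the Spark listener itself; the object name and the hard-coded default of 1000 are illustrative) showing what `calculateNumberToRemove` returns once the store grows past the retained limit:

   ```scala
   object CleanupMath {
     // Same logic as the helper quoted above, extracted so it can run standalone.
     def calculateNumberToRemove(dataSize: Long, retainedSize: Long): Long =
       if (dataSize > retainedSize) math.max(retainedSize / 10L, dataSize - retainedSize)
       else 0L

     def main(args: Array[String]): Unit = {
       val retained = 1000L // assumed default for MAX_RETAINED_STAGES
       // Just over the limit: delete retained/10 = 100 stages, not only the excess 1.
       assert(calculateNumberToRemove(1001L, retained) == 100L)
       // Far over the limit: delete the full excess instead.
       assert(calculateNumberToRemove(1500L, retained) == 500L)
       // At or under the limit: nothing to delete.
       assert(calculateNumberToRemove(900L, retained) == 0L)
     }
   }
   ```

   In other words, each cleanup pass deletes at least `retained / 10` stages, which with the default of 1000 is the batch of roughly 100 lookups the comment above refers to.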




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


