morningman commented on a change in pull request #5947:
URL: https://github.com/apache/incubator-doris/pull/5947#discussion_r645929109



##########
File path: fe/fe-core/src/main/java/org/apache/doris/common/Config.java
##########
@@ -1396,4 +1396,10 @@
      */
     @ConfField(mutable = true, masterOnly = true)
     public static int max_dynamic_partition_num = 500;
+
+    /*
+     * Control the max num of backup job per db
+     */
+    @ConfField(mutable = true, masterOnly = true)
+    public static int max_backup_job_num_per_db = 100;

Review comment:
       ```suggestion
       public static int max_backup_restore_job_num_per_db = 100;
       ```
   And I think 100 by default is too large. How about 10?

##########
File path: fe/fe-core/src/main/java/org/apache/doris/backup/BackupHandler.java
##########
@@ -392,11 +410,49 @@ private void restore(Repository repository, Database db, RestoreStmt stmt) throw
         catalog.getEditLog().logRestoreJob(restoreJob);
 
        // must put to dbIdToBackupOrRestoreJob after edit log, otherwise the state of job may be changed.
-        dbIdToBackupOrRestoreJob.put(db.getId(), restoreJob);
+        addBackupOrRestoreJob(db.getId(), restoreJob);
 
         LOG.info("finished to submit restore job: {}", restoreJob);
     }
 
+    private void addBackupOrRestoreJob(long dbId, AbstractJob job) {
+        jobLock.lock();
+        try {
+            Deque<AbstractJob> jobs = dbIdToBackupOrRestoreJobs.computeIfAbsent(dbId, k -> Lists.newLinkedList());
+            if (jobs.size() == Config.max_backup_job_num_per_db) {
+                jobs.removeFirst();
+            }
+            AbstractJob lastJob = jobs.peekLast();
+            // only save the latest job

Review comment:
       Better to add a comment explaining why we need to remove the duplicate job at the tail of the queue.
   I think the reason is that we may add the same job with different job states when replaying the edit log.
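
   The replay behavior the comment asks to document can be illustrated with a minimal, self-contained sketch (this is not the actual Doris code; the class, the `Job` type, and the cap constant are hypothetical stand-ins for `AbstractJob` and `max_backup_job_num_per_db`):

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;
   import java.util.HashMap;
   import java.util.Map;

   // Sketch: when the FE replays the edit log, the same job id can be logged
   // several times with different states, so the per-db queue should keep
   // only the latest entry for that job.
   public class JobQueueSketch {
       static final int MAX_JOBS_PER_DB = 10; // assumed cap, cf. max_backup_job_num_per_db

       static class Job {
           final long jobId;
           final String state;
           Job(long jobId, String state) { this.jobId = jobId; this.state = state; }
       }

       final Map<Long, Deque<Job>> dbIdToJobs = new HashMap<>();

       void addJob(long dbId, Job job) {
           Deque<Job> jobs = dbIdToJobs.computeIfAbsent(dbId, k -> new ArrayDeque<>());
           // Evict oldest entries until there is room for the new one.
           while (jobs.size() >= MAX_JOBS_PER_DB) {
               jobs.removeFirst();
           }
           // Edit-log replay can deliver the same job again with a newer state;
           // drop the stale tail entry so only the latest state survives.
           Job last = jobs.peekLast();
           if (last != null && last.jobId == job.jobId) {
               jobs.removeLast();
           }
           jobs.addLast(job);
       }

       public static void main(String[] args) {
           JobQueueSketch h = new JobQueueSketch();
           h.addJob(1L, new Job(100L, "PENDING"));
           h.addJob(1L, new Job(100L, "FINISHED")); // same job replayed with newer state
           Deque<Job> q = h.dbIdToJobs.get(1L);
           assert q.size() == 1 : "duplicate should be collapsed";
           assert q.peekLast().state.equals("FINISHED");
       }
   }
   ```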

##########
File path: fe/fe-core/src/main/java/org/apache/doris/backup/BackupHandler.java
##########
@@ -392,11 +410,49 @@ private void restore(Repository repository, Database db, RestoreStmt stmt) throw
         catalog.getEditLog().logRestoreJob(restoreJob);
 
        // must put to dbIdToBackupOrRestoreJob after edit log, otherwise the state of job may be changed.
-        dbIdToBackupOrRestoreJob.put(db.getId(), restoreJob);
+        addBackupOrRestoreJob(db.getId(), restoreJob);
 
         LOG.info("finished to submit restore job: {}", restoreJob);
     }
 
+    private void addBackupOrRestoreJob(long dbId, AbstractJob job) {
+        jobLock.lock();
+        try {
+            Deque<AbstractJob> jobs = dbIdToBackupOrRestoreJobs.computeIfAbsent(dbId, k -> Lists.newLinkedList());
+            if (jobs.size() == Config.max_backup_job_num_per_db) {

Review comment:
       ```suggestion
               while (jobs.size() >= Config.max_backup_job_num_per_db) {
               ...
               }
       ```
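   A likely rationale for preferring `while (size >= cap)` over `if (size == cap)`: the config field is declared `mutable = true`, so the cap can be lowered at runtime, leaving the queue already over the new limit, in which case an equality check never fires and the queue never shrinks. A small standalone sketch (hypothetical values, not Doris code):

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;

   // Sketch: if the mutable cap is lowered at runtime, the queue may already
   // exceed the new limit; `while (size >= cap)` evicts until it fits,
   // whereas `if (size == cap)` would skip eviction entirely (5 != 3).
   public class EvictSketch {
       public static void main(String[] args) {
           Deque<Integer> jobs = new ArrayDeque<>();
           for (int i = 0; i < 5; i++) jobs.addLast(i); // filled under an old cap of 5
           int cap = 3; // cap lowered at runtime

           while (jobs.size() >= cap) {
               jobs.removeFirst(); // drop oldest until there is room for one more
           }
           jobs.addLast(5);
           assert jobs.size() == cap; // queue is back under the new limit
       }
   }
   ```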



