ruanwenjun opened a new issue #5775: URL: https://github.com/apache/dolphinscheduler/issues/5775
**Describe the question**

All the code referenced in this issue is in `AbstractCommandExecutor`. If you have read the task logging code in DolphinScheduler, you probably know that it currently uses a producer-consumer model for logging. One thread writes the log lines into a collection:

https://github.com/apache/dolphinscheduler/blob/b114d330ac1fa7de27e09cc73c0804a7536f3b28/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/AbstractCommandExecutor.java#L351-L371

Another thread consumes the log lines from the collection and writes them to the log file:

https://github.com/apache/dolphinscheduler/blob/b114d330ac1fa7de27e09cc73c0804a7536f3b28/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/AbstractCommandExecutor.java#L375-L390

https://github.com/apache/dolphinscheduler/blob/b114d330ac1fa7de27e09cc73c0804a7536f3b28/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/AbstractCommandExecutor.java#L549-L562

The problem is that any log lines added to the collection between line 558 and line 560 are lost (`logHandler.accept` takes some time, during which new lines can arrive before the collection is cleared).

**Which version of DolphinScheduler:**
- [1.3.6-release]
- [dev]

**Describe alternatives you've considered**

There are two ways to solve this issue:
1. Design a new collection to store the log lines instead of using `Collections.synchronizedList`.
2. Directly use a read-write lock in `AbstractCommandExecutor` to solve the concurrency problem, instead of relying on `Collections.synchronizedList`.
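For illustration, the first alternative could be sketched with a `java.util.concurrent.BlockingQueue`: its `drainTo` atomically moves every queued line into a local batch, so lines appended while the consumer is still inside `logHandler.accept` stay queued for the next drain instead of being cleared away. This is only a sketch under that assumption; the class and method names below are hypothetical and not part of DolphinScheduler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

/**
 * Hypothetical sketch of alternative 1: replace Collections.synchronizedList
 * with a BlockingQueue so the consumer drains atomically instead of
 * "copy list, handle, clear", which is where the lost-log window sits.
 */
public class LogBufferSketch {
    private final BlockingQueue<String> logBuffer = new LinkedBlockingQueue<>();

    // Producer side: called by the thread reading the process output.
    public void append(String line) {
        logBuffer.offer(line);
    }

    // Consumer side: drainTo moves all currently queued lines into a local
    // batch in one atomic step; lines appended during logHandler.accept
    // remain in the queue and are picked up by the next flush.
    public int flush(Consumer<List<String>> logHandler) {
        List<String> batch = new ArrayList<>();
        logBuffer.drainTo(batch);
        if (!batch.isEmpty()) {
            logHandler.accept(batch);
        }
        return batch.size();
    }
}
```

With this shape, the consumer thread simply calls `flush` in its loop and no separate clear step (and hence no race between handling and clearing) is needed.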
