MaskedMenhxy opened a new issue, #16474: URL: https://github.com/apache/dolphinscheduler/issues/16474
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

The phenomenon is as follows (see the attached logs).

### What you expected to happen

Tasks should either continue executing normally, or the old tasks waiting in serial should be discarded.

### How to reproduce

1. Run a workflow instance with the serial-wait execution strategy.
2. Overload the current master server.
3. Ensure the current master drops out of the active master list.

### Anything else

My analysis: when the master node is overloaded, it may become inactive. In that case, the generated workflow instance fails to be updated from "wait by serial_wait strategy" to "submit from serial_wait strategy", so its status in the database stays at "wait by serial_wait strategy". Before the next scheduled workflow instance can update itself from "wait by serial_wait strategy" to "submit from serial_wait strategy", it checks whether any workflow instance with a smaller id is still in the "wait by serial_wait strategy" state; since the stuck instance is, the new instance is never promoted either. As a result, all subsequent workflow instances stay in the "wait by serial_wait strategy" state, and tasks pile up.

The relevant code is `org.apache.dolphinscheduler.service.process.ProcessServiceImpl#saveSerialProcess`.

### Version

3.2.x

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
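The deadlock described in the analysis can be sketched in a few lines. Note this is a simplified, hypothetical model of the serial-wait promotion check, not the actual code in `ProcessServiceImpl#saveSerialProcess`; the class, state, and method names here are stand-ins for illustration only.

```java
import java.util.List;

// Hypothetical sketch of serial-wait promotion. An instance may move from
// SERIAL_WAIT to SUBMITTED only when no older instance (smaller id) of the
// same workflow is still waiting or active.
class SerialWaitSketch {
    enum State { SERIAL_WAIT, SUBMITTED, RUNNING, FINISHED }

    record Instance(int id, State state) {}

    // If an older instance is stuck in SERIAL_WAIT (e.g. its master went
    // inactive before updating it), this check fails for every newer
    // instance, so they all stay in SERIAL_WAIT forever -- the pile-up
    // the issue reports.
    static State nextState(Instance current, List<Instance> all) {
        boolean olderActive = all.stream()
            .anyMatch(i -> i.id() < current.id()
                    && (i.state() == State.SERIAL_WAIT
                     || i.state() == State.SUBMITTED
                     || i.state() == State.RUNNING));
        return olderActive ? State.SERIAL_WAIT : State.SUBMITTED;
    }
}
```

With this model, an instance with id 1 stuck in `SERIAL_WAIT` blocks instance 2, which in turn blocks instance 3, and so on; a fix would need to either re-drive the stuck instance after master failover or skip over instances whose master is no longer active.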
