luckyliush opened a new issue, #16135:
URL: https://github.com/apache/dolphinscheduler/issues/16135

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### What happened
   
   I stopped the workflow manually while it was running, and afterwards the 
workflow status remained "ready to stop". In the metadata, `state` is 4 and 
`end_time` is null. Although the Spark task of this workflow was stopped, the 
running time of the workflow keeps increasing.
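   The inconsistency above can be expressed as a small check. This is a hypothetical helper, not DolphinScheduler code; it assumes that state code 4 corresponds to READY_STOP (as in the 3.1.x execution-status enum) and that a properly finished instance always has `end_time` set:

   ```python
   # Assumption: state 4 = READY_STOP, as reported in this issue's metadata.
   READY_STOP = 4

   def is_stuck_ready_stop(state: int, end_time) -> bool:
       """Return True when a process instance reports READY_STOP but never
       recorded an end_time -- the symptom described in this issue."""
       return state == READY_STOP and end_time is None

   # The reported instance: state = 4, end_time = NULL
   print(is_stuck_ready_stop(4, None))                     # stuck
   # A normally stopped instance records an end_time
   print(is_stuck_ready_stop(5, "2024-06-01 12:00:00"))    # not stuck
   ```

   An instance matching this predicate never transitions to a terminal state, which is why the displayed running time keeps growing.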
   
   ### What you expected to happen
   
   I suspect the SUB_PROCESS node is the cause: when I manually stopped a 
workflow that did not use a SUB_PROCESS node, the problem described above did 
not occur.
   
   ### How to reproduce
   
   To reproduce: create a workflow A containing a Spark-on-YARN task, then 
create a new workflow that references workflow A via a SUB_PROCESS node. 
Manually stop the outer workflow while it is running and observe the behavior.
   
   ### Anything else
   
   _No response_
   
   ### Version
   
   3.1.x
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 
[email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
