zhongjiajie commented on code in PR #12297: URL: https://github.com/apache/dolphinscheduler/pull/12297#discussion_r991744911
##########
docs/docs/en/DSIP.md:
##########
@@ -27,7 +27,7 @@ Current DSIPs including all DSIP still work-in-progress, you could see in [curre
 
 Past DSIPs including all DSIP already done or retired for some reason, you could see in [past DSIPs][past-DSIPs]
 
-## DSIP Process
+## DSIP Workflow

Review Comment:
   Wrong change
   ```suggestion
   ## DSIP Process
   ```



##########
docs/docs/en/architecture/design.md:
##########
@@ -55,7 +55,7 @@
 
   - **WorkerManagerThread** is mainly responsible for the submission of the task queue, continuously receives tasks from the task queue, and submits them to the thread pool for processing;
 
-  - **TaskExecuteThread** is mainly responsible for the process of task execution, and the actual processing of tasks according to different task types;
+  - **TaskExecuteThread** is mainly responsible for the workflow of task execution, and the actual processing of tasks according to different task types;

Review Comment:
   Personally, I think this should not change either.



##########
docs/docs/en/architecture/cache.md:
##########
@@ -2,9 +2,9 @@
 
 ## Purpose
 
-Due to the large database read operations during the master-server scheduling process. Such as read tables like `tenant`, `user`, `processDefinition`, etc. Operations stress read pressure to the DB, and slow down the entire core scheduling process.
+Due to the large database read operations during the master-server scheduling workflow. Such as read tables like `tenant`, `user`, `processDefinition`, etc. Operations stress read pressure to the DB, and slow down the entire core scheduling workflow.

Review Comment:
   I think this content should also not change. Am I right, @caishunfeng?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
