rickchengx commented on PR #12118:
URL: 
https://github.com/apache/dolphinscheduler/pull/12118#issuecomment-1261660941

   > IMHO, I think a pid file may not be a good idea, because a pid file can 
keep only one process pid:
   > 
   > 1. If there are two Master processes running at the same time, this status 
function will only find one of them, because the user may start another process 
simply by calling start.sh again.
   > 2. If the user stops the process with the `kill` command, the pid file is 
not cleared.
   > 3. Some misoperation may clear or modify the pid file.
   >    I think the result of a `ps -ef` check is more accurate and 
reliable. Personal opinion, just for reference.
   
   Hi, @DarkAssassinator , thanks a lot for the comment. Sorry for the late 
response.
   
   > If the user starts DS twice (or more) using dolphinscheduler-daemon.sh, 
the pid file will be overwritten, making it impossible to stop the initially 
started DS cluster.
   
   1. In fact, the purpose of this PR is to prevent the user from starting DS 
twice. When the user starts a server, the script first checks whether it is 
already started; if the server is already running, the script cancels the launch.
   
   2. As for the point that using a pid file to check the status may not be a 
good option (for example, the pid file may be modified manually): the script 
currently uses the pid file to kill the process as well, so it faces the same 
problem. And the original approach (`grep #DOLPHINSCHEDULER_HOME`) is also 
likely to cause errors, since there is no complete guarantee that no other 
process matches the filter criteria.
   
   3. I think the script should take the **same approach** to find the process 
when checking the process status and when killing the process (either 
`grep ...` or checking the `pid` file). And I personally prefer checking the 
`pid` file because it is a mechanism used by many programs on Linux.
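   The pid-file approach described above can be sketched roughly as follows. 
This is only an illustrative sketch: the pid-file path, the `server_running` 
helper, and the command names are assumptions for demonstration, not the actual 
variables or logic of `dolphinscheduler-daemon.sh`.

   ```shell
   #!/bin/sh
   # Hypothetical pid-file based daemon control sketch (illustrative only).
   # The same pid-file lookup is used for start, status, and stop, so the
   # three commands always agree on which process they are talking about.
   PID_FILE="${PID_FILE:-/tmp/demo-server.pid}"

   server_running() {
     # Running only if the pid file exists AND that pid is still alive.
     [ -f "$PID_FILE" ] || return 1
     pid=$(cat "$PID_FILE")
     kill -0 "$pid" 2>/dev/null
   }

   case "$1" in
     start)
       if server_running; then
         echo "server already running as pid $(cat "$PID_FILE"); cancel launch"
         exit 1
       fi
       # ... launch the server and write its pid into "$PID_FILE" ...
       ;;
     status)
       if server_running; then echo "running"; else echo "not running"; fi
       ;;
     stop)
       # Kill via the same pid file the status check reads, then clear it,
       # which avoids the stale-pid-file problem after a manual `kill`.
       if server_running; then
         kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
       fi
       ;;
   esac
   ```

   Note that a stale pid file (left behind by a crash or a manual `kill`) is 
handled by the `kill -0` liveness probe rather than by trusting the file's mere 
existence.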


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
