We have a job that executes a shell script (on a slave) to restart a dev appserver on a remote server when it stops working properly (never mind why it stops working):

  # BUILD_ID=dontKillMe
  ssh [email protected] exec /path/to/appserver/force_restart_script arg1

The "force_restart_script" on the *remote* server doing the following:

  # find and kill the old appserver process
  kill_appserver_script arg1
  # start appserver with specified argument
  /path/to/appserver/start_appserver_script arg1
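
Independent of Jenkins, one thing worth noting: if start_appserver_script leaves the new process attached to the ssh session, it will get a SIGHUP when the session ends. A detached start would look roughly like this (only a sketch; the log path, and the assumption that the script does not daemonize itself, are mine):

  # start the appserver detached from the ssh session
  # (assumes start_appserver_script does not daemonize itself;
  #  the log path is a placeholder)
  nohup /path/to/appserver/start_appserver_script arg1 \
      > /tmp/appserver.log 2>&1 < /dev/null &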

In the end, the old appserver process was killed, but the new appserver process was terminated as well. How can I keep this from happening? BUILD_ID=dontKillMe in the job configuration doesn't seem to work (that would only keep the ssh process on the slave from being terminated, right?), or should I actually set BUILD_ID in the remote ssh shell?
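
To make the second alternative concrete, I mean something like this (just a sketch of the idea, not verified; the quotes are there so the assignment happens in the remote shell):

  ssh [email protected] "BUILD_ID=dontKillMe /path/to/appserver/force_restart_script arg1"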

Any help is very much appreciated!

-jv

On 26.04.2013 10:16, Riccardo Foschia wrote:
Hi,

Take a look at https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller, section "If your build wants to leave a daemon running behind..."
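
That section boils down to launching the daemon with a different BUILD_ID, so the ProcessTreeKiller no longer matches its environment to the finished build. Roughly (the daemon path is a placeholder):

  # override BUILD_ID so the ProcessTreeKiller ignores the daemon
  # (assumes the daemon detaches itself from the shell)
  BUILD_ID=dontKillMe /path/to/daemon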

Greetings,
Riccardo

On 26.04.2013 10:08, hezjing wrote:
Hi

I have a job which will be run on a Linux slave.

This job will execute a shell command to start a server process which will
run forever. Unfortunately this process is terminated when the job is
finished.
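
The "Execute shell" build step is essentially just this (the path is a placeholder):

  # start the long-running server from the build step
  /path/to/server/start_server.sh &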

When I tested this using PuTTY, the server process was still alive after I
logged in and out several times.

May I know how to keep a Unix process alive after the job is completed?



