It's by design. If you expect a command to fail and want the task to
continue, you need to either wrap it in a begin ... rescue block,
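Here's a plain-Ruby sketch of that begin/rescue pattern. Capistrano's `run` raises when the remote command exits nonzero; `fake_run` below is a stand-in that raises the same way, so the sketch runs without Capistrano (the name is made up for illustration):

```ruby
# Stand-in for Capistrano's `run`: raise if the command exits nonzero.
def fake_run(cmd)
  raise "command failed: #{cmd}" unless system(cmd)
  cmd
end

log = []
begin
  log << fake_run("false")   # stands in for "/stopTomcat" when Tomcat is down
rescue RuntimeError
  log << "ignored failure"   # swallow the failure and keep going
end
log << fake_run("true")      # later commands still execute

puts log.inspect  # => ["ignored failure", "true"]
```

In a real recipe you'd rescue Capistrano::CommandError around the `run` call that's allowed to fail.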

or handle it within the shell remotely

ie:

run "rm -rf somefile"

or if the command doesn't have a forgiving variant (like rm -f or mkdir -p), then

run "/stopTomcat;true"

That will make the command always exit as if it worked, regardless of
the actual exit code.
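A quick way to see the effect in plain sh (nothing Capistrano-specific here):

```shell
# `false` fails, but the trailing `true` masks its exit status,
# so the compound command reports success.
sh -c 'false; true'
echo "exit=$?"   # prints exit=0

# Without the `;true`, the failure shows through:
sh -c 'false'
echo "exit=$?"   # prints exit=1
```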

You'll also find there's a performance boost if you combine multiple
runs into a single shell command.

ie:

run %Q{
  set +e;
  command1;
  command2;
  command3;
  set -e;
  command4;
  command5;
}

Commands 1-3 will run regardless of exit code; commands 4 and 5 will
fail the whole run, stopping at whichever command failed.

I wrote a little helper for myself to clean up this kind of execution list:

http://gist.github.com/188784
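A minimal sketch of the idea (illustrative only, not the gist's actual code; the method name is made up):

```ruby
# Hypothetical helper: build one shell string where the first group of
# commands may fail freely (set +e) and the second group must succeed
# (set -e). Pass the result to a single `run` call.
def shell_script(tolerant, strict = [])
  lines = ["set +e"] + tolerant + ["set -e"] + strict
  lines.join(";\n") + ";"
end

script = shell_script(["/stopTomcat", "rm somefile"], ["command4"])
puts script
# => set +e;
#    /stopTomcat;
#    rm somefile;
#    set -e;
#    command4;
```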

On Thu, Sep 17, 2009 at 4:02 PM, pete <[email protected]> wrote:
>
> Hi-
>
> I use Cap for some sys admin stuff, and I am finding that if one "run"
> command fails, that the whole task bombs out, why is this?  I didn't
> see this anywhere in the docs.  Is there a way to get around this?
>
> For example, if I do the following:
>
> task :cleanup, :roles => "somehost" do
>   run "/stopTomcat"
>   run "rm somefile"
>   run "..."
> end
>
> If Tomcat is not running, the task exits, OR, if "somefile" does not
> exist, the task exits without running the rest of the commands.
>
> Thanks!
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Capistrano" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.co.uk/group/capistrano?hl=en
-~----------~----~----~----~------~----~------~--~---
