Jason wrote:
> Aggh, you've just sparked an old suppressed memory.
>
> I had to build this huge BIND install and hardening
> script one time. I created a function for just about
> every task in the script and used a function like that
> to check return values. In each of the task functions,
> I'd set a variable for function name (func_name).
>
> Something like:
>
> check_return ()
> {
>     if [ $? -ne 0 ]
>     then
>         echo "$func_name went foobar"
>     else
>         echo "A-OK"
>     fi
> }
>
> I'm no shell guru. Isn't there a better way to do
> error checking in huge shell scripts?
Every command prints something to stderr when it fails. Most commands
print their name when they fail. What exactly are you gaining by
printing another error message?
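A quick illustration (the exact wording of the diagnostic varies by
platform and ls implementation):

```shell
# ls already reports its own name and the reason on stderr when it
# fails, e.g. "ls: /etc/notthere: No such file or directory".
# A wrapper therefore only needs to add task context, not restate
# the error ("harden_bind" here is just a made-up task label):
ls -l /etc/notthere 2>/dev/null || echo "harden_bind: task failed" >&2
```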
Okay, let's say you were writing this to a log instead of to stdout.
Then I'd do it like this.
function logstatus() {
    tmpfile=$(mktemp)    # scratch file to capture the command's stderr
    if "$@" 2> "$tmpfile"
    then
        logger -p user.info "okay: $@"
    else
        logger -p user.err "failed: $@: $(cat -v "$tmpfile")"
    fi
    rm "$tmpfile"
}
logstatus take out trash
logstatus ls -l /etc/notthere
Also, I often put -e on the shebang line (#!/bin/sh -e), which makes
the script exit as soon as any command fails. You can also use the
trap builtin to execute arbitrary code when something fails. However,
at that point there's no way to find out what failed or how, unless
the shell was killed (got a signal).
> "Use python"
That would work too. (-:
--
Bob Miller K<bob>
kbobsoft software consulting
http://kbobsoft.com [EMAIL PROTECTED]
_______________________________________________
EuG-LUG mailing list
[EMAIL PROTECTED]
http://mailman.efn.org/cgi-bin/listinfo/eug-lug