Johannes Schindelin <johannes.schinde...@gmx.de> writes:

>> +    ret = cmd_checkout(args.argc, args.argv, prefix);
>
> I guess it is okay to run that, but the cmd_() functions are not *really*
> meant to be called this way... Personally, I would be more comfortable
> with a `run_command()` here, i.e. with a spawned process, until the time
> when checkout.c/checkout.h have learned enough API for a direct call.

Thanks for pointing it out.  I'll add a bit more for those who are
reading from the sidelines (iow, if you are Dscho, you do not have
to read the remainder, as I know Dscho knows all of this).

It is not OK to use cmd_$foo() as a subroutine; depending on the
value of $foo, where the call is made, and the number of times the
call is made, it may happen to work OK in the current code, but that
is a ticking time bomb waiting to go off.

This is primarily because cmd_$foo() is designed to be a replacement
for "main()" in usual programs---it is allowed to assume the global
variables it uses have their initial values, and nobody cares about
the state it leaves behind when it returns.  The argument parser may
flip bits in file-scope static variables to record which options
were seen, assuming that these variables are initialized to
all-zero, and that assumption would not hold for a second call to
cmd_$foo(), etc.  cmd_$foo() also is allowed to call die() when
there is an unrecoverable error condition in the context of carrying
out $foo; a scripted Porcelain that used $foo as a building block to
implement a larger operation (e.g. "stash" that used "checkout" to
switch to a different branch) may want to react to the failure and
do something extra (i.e. "git checkout || do something more").

Using the run_command() interface would not cause any of these
problems; the subcommand will run in a clean environment, and the
caller can react to its failure.
