> Well if you get down to brass tacks, most people are functionally
> illiterate.

True enough.
And they're proud to say "I was never any good at math" ...
And why do they think "walking encyclopedia" is an insult?

> At least in Canada, which is the only country that I know
> dares try measuring such stuff.  (Canada's definition of "functionally
> illiterate" is "can't read at a grade 8 level".  Which is supposed to
> be the level that, for instance, newspapers aim for.  Based on
> personal experience, I'd say that the USA does a worse job of
> education than Canada...)

In Massachusetts, I think it's about an 8th grade reading level that
is required for the "needs improvement" on the 10th grade high-stakes
test that is, pathetically, the required score to graduate high-school
(12th grade). So I guess we're doing a little better than the
Canadians, but still pretty sad, I'll agree.

> There are so many reasons why Dijkstra could have said it that there
> is no point in guessing which specific items motivated him.

Amen.

But my point is most introductory CS courses are, well, rather
introductory.  They usually waste a semester on syntax and then
finally get into data structures 2nd term ... and then expect you to
write a compiler for your term project without having really given you
any decent theory. The big change was BASIC/FORTRAN => PL/I or PASCAL
=> C/C++ => Java.  Such a change.

There is an alternative course. http://mitpress.mit.edu/sicp/
Had Guido read this book, he would understand why Python has most but
not all of Lisp-ness, and he might even understand closures and
continuations.

Were this book used at more institutions as the introductory book it's
intended as, rather than as an upperclass/graduate text for the Scheme
elective, perhaps map { } and recursion wouldn't be so foreign to so
many qualified programmers. The very lack of syntax in Lisp is ideal
for a tutorial language ... it's covered in the first lecture, not the
first semester, and we're on to semantics and doing fun programs the
first week.

The other sane approach is teaching MACHINE CODE first --  on a simple
machine, pure binary, Knuth's MIX or PDP-8 -- before showing
Assembler. Then quick history of languages and the art. Then assign
tasks with free choice of language. But they'll say that doesn't teach
employable skills. So what? Thinking is what we should be teaching.

A degree should last a lifetime. Skill in a given programming language
rarely lasts 10 years, if that.

Seen any jobs in Algol, Pascal, PL/1, Ada, Smalltalk, or APL lately?
They were once commercial, not just academic, and that justified using
them academically, in introductory programs intended for "users", not
just for future professors. I only once saw an ad for Eiffel in the
US, alas; I decided it was a bad way to bet a career. C lives on in
C++, with some managers not understanding why you should do some
layers of code in C++ instead of Java *sigh*. Java has had its 10-year
run, but should be ripe for blindsiding soon; it's in bloat mode.

COBOL, FORTRAN, and LISP, the originals that have lasted 50+ years,
are an exception of sorts, except that they've been rejuvenated every
5-15 years. As Perl is now doing, again.


Fortran, Cobol, and Lisp have survived by stealing from all the new
languages. (FORTRAN II initially didn't even have THEN, and eventually
Fortran got an ELSE too.)

An old joke that is eerily true -- "I don't know what language
accounting software in 2030 will be written in, but it will be called
COBOL2020."
  (Except our new 3rd-party accounting package purchases are built in
Java and C++, which makes me ask vendors hard questions about numeric
precision. Arbitrary-precision packed decimal beats BigNum if you care
about both speed and precision. The death of COBOL is still greatly
exaggerated.)

> > A mathematician would usually rather use a reduction to a previously
> > solved problem than a counting argument.

> Sorry, this is BS.

No, it's the punch line to the old joke,
"thus reducing it to the previously solved problem" -- leaving the
hotel to burn down or the hot bar hostess to solve her own problem, or
whatever the setup was this time.

One of those stereotypes which is funny because it is so true.

> I am speaking here as an almost mathematician. (I
> came about a month from finishing my PhD then
> encountered a need for
> money...)

That's closer than I got to a professional degree.
(But Mom didn't get her doctorate until she was 55, so I've got a few
years left. And besides, why do I need an advanced degree if the local
Uni will let me teach anyway?)

If I go back in pure math, it'll only be if I find a Constructivist or
Non-Standard Analyst to study with ... those Classical Analysts
disturb me. (Giving "Baby Rudin" to a Sophomore is cruel; it took me
years to discover there was a better way, but by then it was too
late.) But I'm more likely to do something in computational dynamics
with a blend of physics next time, if there can be a next time ...

I try to make a point of staying in contact with the maths. Early in
my computer career, I met another "applied mathematician" in the
computer field at a job interview. He told me to check out the local
Math. Assoc. Am. chapter as a way to stay in touch. Best advice I
ever got on a job interview. So I still enjoy the company of
mathematicians, and take some of the magazines. Alas, I'll miss the
NES MAA meetings again this month; it was Mother's Day weekend last
spring and I have unavoidable family this fall too :-(

We had a great public math lecture locally recently. You might have
enjoyed it. http://use.perl.org/~n1vux/journal/31308


So by Credentials, I am not a Mathematician either.
But I was a mathematician before I matriculated, and still am; that is
the way I think, to the annoyance of many normal people. Internally, I
do not self-identify so much as a programmer but as an applied
mathematician, currently working in business software, bringing the
technology of the 1990's to the accountants (more fun than it sounds!)
with a hobby in Perl and Linux and a research interest in weather
models.

> Mathematicians are happy to use any technique they can.

Of course.

But the punch-line to the old meta-joke is right, we'd still rather
#include an old proof we trust than construct a new argument. Code you
don't write doesn't have bugs; reuse of proofs is even more powerful
than of code, as rigor matters in math. (As it ought to in computer
science, per Dijkstra and Hoare.)

> But counting arguments are often preferred

Counting arguments are quite popular in some branches of number
theory, and can be moderately effective in (finite) algebra and
perhaps (discrete) geometry. They are notable when they work for
analysis. When the pigeon hole principle applies, it's slick. We
Constructivists may be somewhat disappointed with a "result" that only
shows a collision must exist somewhere without saying where, though.

> because they tend to be more
> straightforward, and they tend to be more informative.

Constructivists (as I might be) certainly like informative proofs.
(If I'm more of a Non-Standard Intuitionist infinitesimalist, maybe I
won't be quite so dogmatic about informative proofs as a
Constructivist, but I'm sympathetic.)

> By more
> informative I mean that the counting argument often gives you
> something which can be used to produce either more precise or else
> follow-up results.

Constructivists mean by informative giving a construction of the
number(s) that exist (or the general form, or the pair in the
pigeon-hole with more than one).

> Reduction to a previously solved problem is used a lot simply because
> it is a more powerful technique.

Right. In direct and meta senses.

>  When that allows more elegant
> solutions, that can be a big win.

Elegant, sometimes.

Often, the result of the reductionist style is not so much elegant as
a strange, non-obvious sequence of results, each trivial, resulting in
a surprise ending. The twisted underlying logic of the sequence
becomes clear - there is meta-programming iteration in lemma-space --
the game is to find a sequence of lemmas { T .. QED } such that each
step has a simple reduction to a previous lemma in the sequence.

The value of this strange sequence is that each step can be verified
by the teaching assistant or referee easily, so the paper is less
likely to be returned for correction. This doesn't necessarily make
for beautiful mathematics ... but it makes for reliable mathematics.

This is not elegant, particularly not in the typical semester-long
textbook elucidation of the proof of the Central Theorem of
<CourseTitle>.  I'd enjoy those courses much more now, having seen the
uses for those theories and having seen the history of the development
of those theories ... at the time, it was rabbit from hat.


The formal technique to prove an iterative loop correct is to derive
the loop invariants and prove the inductive inference of the invariant
for 1 (or 0) and for N+1 given N, and termination. To *correctly* code
a loop, you have to have thought about the very conditions you'd use
to code it as a recursion.

In Damian's "SWIM" "Say What I Mean" rule, why not say it that way in
the first place?
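
A minimal sketch of what I mean, in Perl (the invariant here is the
closed form for the running sum; everything else is made up for
illustration). The asserted invariant in the loop is exactly the
inductive step you'd state to justify the recursive version:

```perl
use strict;
use warnings;

# Iterative sum of 0..$n, with the loop invariant
# ($sum == $i * ($i + 1) / 2) checked on every pass.
sub sum_iterative {
    my ($n) = @_;
    my $sum = 0;
    for my $i (0 .. $n) {
        $sum += $i;
        # Invariant: after processing $i, $sum is the sum of 0..$i.
        die "invariant violated at i=$i" unless $sum == $i * ($i + 1) / 2;
    }
    return $sum;    # Termination: the loop body runs exactly $n + 1 times.
}

# The same function as the recursion the invariant mirrors:
sub sum_recursive {
    my ($n) = @_;
    return 0 if $n == 0;                 # base case: the "for 0" obligation
    return $n + sum_recursive($n - 1);   # inductive step: N+1 given N
}
```

Writing the invariant first makes the two forms visibly the same proof.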

Before you ask, why yes, I HAVE proved loops correct. Not in college.
Every time I've done it I was paid -- I was paid to learn how, too.
Since then, once every decade, I've found some loop that was not
going to be fun to debug or test adequately, and said, OK, we're going
Dijkstra on this, it will be more cost-effective.

And I've probably debugged a few more loops that would have been right
the first time if I'd written the invariants first.

Loops are often victims of off-by-one (fence-post) errors, all too
often silently (until the buffer overflow attack hits).

Recursion tends not to fall victim to those, but is more likely to
just not halt at all, which is rather obvious on the first test.

Seems to me Recursion is easier to get right.

> However not always.  In fact there
> is a whole branch of mathematics devoted to nothing else.  (It is
> called combinatorics.)

Combinatorics is all about counting groups of things, yes.
 ... usually by partitioning into previously counted sets, and
adding up the recursive counts of the partitions.

> On a side note, I remember being part of an interesting conversation
> on why students seem to find it easier to learn induction than
> recursion when they're almost identical.

Maybe because we've had more centuries of practice teaching induction
to freshmen than we've had teaching loop termination and recursion?

> Two big parts of the answer
> seem to be that induction is somewhat simpler in form, and the
> presentation has a more linear flow.

Linear flow does seem to be important to a significant percentage of
people. Lots of folks couldn't handle Twin Peaks, lots of folks can't
handle Exceptions or recursion. Nature or nurture?


A naive recursive Fibonacci implementation is easy to understand; it's
just hard for many to understand why it's slow, and how to fix it ...
that linear thinking problem again.
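
For concreteness, here's the naive version next to one common fix,
memoization (a sketch; the hash-cache approach is just one of several
remedies):

```perl
use strict;
use warnings;

# Naive doubly-recursive Fibonacci: easy to read, exponential time,
# because fib($n-1) and fib($n-2) recompute the same subproblems.
sub fib_naive {
    my ($n) = @_;
    return $n if $n < 2;
    return fib_naive($n - 1) + fib_naive($n - 2);
}

# The fix: memoize, so each fib($k) is computed once, then looked up.
my %seen = (0 => 0, 1 => 1);
sub fib_memo {
    my ($n) = @_;
    $seen{$n} = fib_memo($n - 1) + fib_memo($n - 2)
        unless exists $seen{$n};
    return $seen{$n};
}
```

Seeing *why* the cache helps requires picturing the call tree, which
is exactly where the linear thinker gets stuck.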

> People seem to have a hangup
> when reading code that is going to be executed multiple times.

Functions coded tail-recursively are not only easier for compilers to
optimize, they're easier for most people to comprehend than things
that recurse in the middle, agreed.  State pushed invisibly in the
middle is mind-numbing to the linear mind. The spatial mind seems to
grasp it more easily, but geometers who program want speed in the
graphical engine :-)
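
The difference in a small Perl sketch, factorial both ways (names are
mine, for illustration):

```perl
use strict;
use warnings;

# Recursing "in the middle": the multiply is pending AFTER the
# recursive call returns, so state is pushed invisibly on the stack.
sub fact_middle {
    my ($n) = @_;
    return 1 if $n <= 1;
    return $n * fact_middle($n - 1);
}

# Tail-recursive: the accumulator carries the state explicitly,
# so the recursive call IS the answer -- nothing is pending.
sub fact_tail {
    my ($n, $acc) = @_;
    $acc = 1 unless defined $acc;
    return $acc if $n <= 1;
    return fact_tail($n - 1, $n * $acc);
}
```

The tail version reads almost like the loop a linear thinker would
have written, which is rather the point.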

This may be what makes Guido's "head explode" (as he said) with
Continuations, Closures, and anonymous blocks in general.

> It is
> more natural to say, "Here is 1.  OK, based on 1, here is 2.  Based on
> 2, here is 3.  And so on."

Yup.
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime ...
(Aside to the non-mathematicians, if any are still reading - that's
ANOTHER old joke ... 1 and 9 are NOT prime, I know that.)

Point being the key with recursion or induction is getting the
predicate right; the point with a loop is getting the termination
condition right. Same thing.

> Also recursion tends to be more
> complicated.

If you only "test" your loop by "testing" the whole program and hoping
it works, yeah, iteration may be less complicated.

If you annotate it with loop invariants, to show that it will
terminate and will do so on the right pass, not one early, not one
late, it's more complicated, because you've had to write it both ways.
The recursion carries its proof with it.

The C-style FOR loop in Perl is notorious for off-by-one.
The Perl-ish for $i (1..$#ARRAY) is equally notorious for off-by-one,
as is for $i ([EMAIL PROTECTED]).

(ASIDE - for(@ARRAY) and while (shift @ARRAY) are safer, of course,
and I'm not implicating them. But map {} @ARRAY is still a higher
order expression if the goal is to collect the results as opposed to
the side-effects; and shift grep { } @ARRAY will find the first, but
perhaps not quite as efficiently, esp. if @ARRAY is big or
lazy-infinite, unless grep preserves laziness.)
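
To make the fence-post concrete (a toy array, obviously):

```perl
use strict;
use warnings;

my @ARRAY = qw(a b c d);

# The classic Perl-ish off-by-one: 1..$#ARRAY silently skips index 0.
my @skips_first = map { $ARRAY[$_] } 1 .. $#ARRAY;   # b c d -- oops
my @full_range  = map { $ARRAY[$_] } 0 .. $#ARRAY;   # a b c d

# The index-free style has no fence-posts to miscount at all.
my @safest = @ARRAY;
```

No warning, no error; just a quietly missing first element, which is
exactly how these bugs survive until the overflow hits.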

>For instance virtually no elementary math proofs have
> multiple base cases, but this is fairly common in recursive
> algorithms.

I would be willing to stipulate that any proof with multiple base
cases wasn't elementary.

In *theoretical* recursive algorithms, one base case is
(theoretically) adequate, same as in induction. Recursion on Trees and
Tries still has one true base, a null pointer; it just can be reached
in 2 or 3 places. In *applied* algorithms, the dirty data may seem to
require multiple base cases because there are multiple ways to run out
of data worth chasing. Suitable abstraction, such that the current
state object has a "done" predicate, can generally reduce the dirty
practical algorithm to the purity of the theoretical algorithm, in a
way that makes sure you use all 3 checks whenever you use any of them
(good encapsulation) and lets you use a generic coding of the
algorithm for multiple sorts of trees to splice. This is goodness.
C++ and Java people have strange
names like "Abstract Template" for algorithms that work if the data
has just enough interface ... it's just normal DWIM polymorphism to
Lisp, Smalltalk, and Perl hackers. Guido probably thinks Python is the
only language to have this besides Lisp, since Paul Graham said he had
it.
(This reminds me of the joke about the walled area in heaven with the
QUIET signs on the outside.)
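
A sketch of the single-base-case point in Perl (the hashref tree
shape and the predicate name are hypothetical): one "done" predicate,
reached from two places, instead of two ad hoc checks.

```perl
use strict;
use warnings;

# The one true base case, in one place: the null (undef) pointer.
sub done { !defined $_[0] }

sub tree_sum {
    my ($node) = @_;
    return 0 if done($node);            # single base case...
    return $node->{value}
         + tree_sum($node->{left})      # ...reached from 2 places
         + tree_sum($node->{right});
}

my $tree = {
    value => 1,
    left  => { value => 2, left => undef, right => undef },
    right => { value => 3, left => undef, right => undef },
};
```

Swap in a richer done() (end-of-data, depth limit, dirty record) and
the algorithm itself never changes; that's the encapsulation win.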

> It may be helpful to point out the obvious here.  There is a widely
> known mathematical notation for expressing iterative counting
> expressions.  It is the Greek letter Sigma.

I would disagree.  (Big) Sigma is the operator for Sum. It's the
indices on the (big) Sigma, and on the (big) Pi for product, and on
the tensor form, that indicate iterative expansion, to be done before
or while applying an operator. (Sigma is also the implied operator not
written on tensor forms.)  Sigma can also be applied to an expression
representing a sequence without indices, where summation is over the
whole sequence; some authors slavishly put in the i=0..inf indicia in
those cases, but it need not be so, and is not in treatments where
sequences become "1st-class" objects of discourse.

Sigma is a reduction operator: it collapses a sequence to a number (or
a term, in the presence of other variables).

Iteration is the production of sequences from a general term with a
variable taking integer values.

These are sadly confounded in elementary classes, as the general term
can represent the whole sequence as well as its general term. Usage
is usually context-sensitive.
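
The distinction reads nicely in Perl, where the two operations have
different names: map is the iteration that produces the sequence from
the general term, and reduce is the Sigma that collapses it.

```perl
use strict;
use warnings;
use List::Util qw(reduce);

# Iteration: produce a sequence from the general term i**2, i = 1..5.
my @sequence = map { $_ * $_ } 1 .. 5;        # 1 4 9 16 25

# Reduction (Sigma): collapse the sequence to a single number.
my $sigma = reduce { $a + $b } @sequence;     # 55
```

Note that reduce takes the whole sequence as a first-class thing; the
indices belonged to the map, not to the Sigma.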

> There is no corresponding
> widely used mathematical notation for expressing recursion.

Recursion in maths is typically meta-math, reducing to the previously
proved lemma, and thus not used in "expressions" but used in
structuring proofs.

(There are obvious screaming exceptions, e.g. Factorial/Gamma,
Fibonacci, fixed-point theory, etc.)

> (There do
> exist notations for it, but they are not nearly as widely used or
> understood.)

Recursion in programming is more related to induction in proof anyway,
as you noted above. There's typically no *symbol* because it's the
*pattern* of the proof, not a bit of expression. Proofs might be
shorter if we had symbols for tropes in proof, but ... wait, that's
Category Theory. (E.g., The Diagram Commutes, QED.)

> There are a number of reasons for this, but one of the
> major ones is that mathematicians find looping and iteration more
> straightforward concepts.

I wouldn't have said Classical *Mathematicians* "loop" and "iterate"
much; aside from Fibonacci and factorial, they don't recursively
define functions much either. But they reduce to the previously solved
and use the pigeon-hole principle whenever it will simplify a proof.

Combinatorists, sort of; number theorists some, I guess; but even they
don't (didn't used to -- do they now?) call it looping and iterating.
That's for (theoretical) computer scientists to do.

[There are the occ. proofs-by-algorithm that loop bound and branch,
with built in proof of termination by reducing a measure, as a more
complex form of reverse induction.  ... The Euclidean Algorithm for
GCD is a classic here. These are delightfully constructive. ]

> Another tangential note.  It is important to distinguish between how
> straightforward a set of concepts is and how straightforward it is to
> express an idea using those concepts.

I deny the distinction, if "concept" includes comprehension in a
useful form.  If expressing an idea with a set of concepts isn't
straightforward, there's something wrong with the set of concepts, or
the choice of it as a basis for that class of  idea.

I would think expressivity is central to straightforwardness.

> For instance goto is
> conceptually very straightforward,

The immediate operational definition of GOTO is straightforward, in assembler.

However, GOTO destroys any hope of a sane theory of a higher level
program, so I deny the concept is as straightforward in any modern
sense as was once thought.  Which is why Dijkstra considered it
harmful.

GOTO may be *forthright* in its imperative nature, more so than the
COME FROM of an exception (or ON CONDITION in the prior incarnation),
but it's not straightforward.  In any language with an interesting
stack nature, how to implement and restrict GOTO is non-trivial; it
can NOT be straightforward as I grok the word.

(Except in Assembler & Machine code, where the Instruction Pointer is
a central semantic object, but that's not interesting here, I think we
can agree.)

> but ideas expressed with goto tend
> to be very obscure.

The GOTO does not express an idea.
It expresses an immediate imperative that is only a piece of an
expression of an algorithm.

Yes, it mangles the expression of that larger idea.

Goto considered harmful.
Anything that obscures the semantics considered harmful.
I think that's your point about exceptions (when used badly).


> > The beauty of Perl is that Larry has wrought a language in which you
> > can express things according to your simplicity, and those who see an
> > inner simplicity in the Lisp-inspired and APL-inspired dialects of
> > Perl can also happily use our simplicity.

> Agreed.

Amen.

Which brings us back to Guido imposing his simplicity on all Python-handlers.

As a devotee of all other things Monty Python, I'd really LIKE to like
a language named for them, but he's right, it really is about Spam,
Ham and Eggs ... Python the dialect is spam spam spam spam and more
spam, you have only one choice, Guido's menu is all spam all the time.

I just hope the final Python Perl joke isn't "This is a dead parrot".

[Exceptions %< snip]
> Right.  Depending on what kinds of programming you're
> doing, they may or may not be very useful for you.

Amen.
If it hurts when you do that, don't do that.
TIMTOWTDI. The Camel is lumpy for a reason.


> Funny, the things that I've seen good people complain about
> have to do with unexpected flow of control which
> programmers have not thought through.  This holds whether
> or not you're using objects or strings.

Very true. COME FROM is even more unpredictable than GOTO.
As implemented in PL/I ON CONDITION's and C traps,  the COME FROM was
highly unpredictable because it was global-scope, immediate, with
strange memory leaks and other action-at-a-distance due to the stack
not unwinding. Java (and recent C++) at least unwind the stack on
exceptions, an improvement. And the return-from-middle at least goes
to the block surrounding the failed call, not to some global-scope
function that registered a callback for an error it expected but
didn't get, isn't really expecting now, and never un-registered, since
registration is global, not lexical, in C. Ugh. Yeah, I remember
those days, and I know why they object to unexpected flow of control.

HOWEVER.

When using modern lexical try/catch with exceptions, you don't wind up
far far away for no apparent reason. You're either still in a block in
the call stack, or back at the command prompt. If you didn't do
try/catch, the called routine's "die" would have left you at the
command prompt anyway. No harm, no foul. If you lexically
try{something} catch{a class of OO exception sensibly tailored}, you
can't accidentally handle some exception other than the one you were
concerned with.
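
A minimal sketch with core eval/die (the class name and message are
invented): catch only the class you tailored, rethrow everything else
so it keeps unwinding as it would have anyway.

```perl
use strict;
use warnings;

package MyApp::Timeout;
sub new     { my ($class, %args) = @_; return bless { %args }, $class }
sub message { $_[0]{message} }

package main;

my $result = eval {
    die MyApp::Timeout->new(message => 'backend timed out');
};
if (my $err = $@) {
    if (ref $err && $err->isa('MyApp::Timeout')) {
        # The one class of exception we sensibly tailored for.
        $result = 'retry later: ' . $err->message;
    }
    else {
        die $err;    # not ours -- let it keep unwinding the stack
    }
}
```

The isa() test is what keeps you from accidentally handling some
exception other than the one you were concerned with.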

> But that said, I'm not a huge fan of exception objects.

Me neither. I don't like writing the sort of code that needs them.

>   One big
> reason is that exceptions are by nature code that is only run when
> things go wrong.  Programmers being programmers and human nature being
> human nature,

Indeed. I say often you have to be an optimist to be in this business.
If we didn't truly BELIEVE the next compile will be the one that
works, we'd go crazy. (Hmm, maybe we ARE crazy.)

> this is the part of your codebase that is least likely
> to be tested or debugged.
> And is therefore the most likely to be
> faulty.

If you need to use exceptions, you need to be using the very best
practices in testing, and a formal proof that the exceptions won't
retry forever under any condition might be good too.

> The one thing that I don't want to have happen is for things
> to go further wrong after they've already gone wrong.

Unless it's very very obvious. The worst thing is usually to print a
wrong answer instead of halting with error, except when halting is the
worst thing.

> It seems to me that exception objects encourage more complexity around
> error handling, not less.

Misuse of exception objects encourages fuzzy thinking, yes.
Misuse of a lot of features encourages fuzzy thinking.
This does not make regexes bad.

> More complexity means more room for
> mistakes, which is the opposite of what I want.

Guido seems to like to limit the scope for errors in taste.
TIMTOWTDI.
You have the choice to use exception objects or strings; you have the
choice to try/catch or just let it die.

> And an important
> special case is when the exception is thrown because Perl is out of
> resources.  In that case doing something complex is not only unwise -
> it may be impossible!

However, if you try to do something fancy, it will re-throw
effectively the same exception from the new location ... so no harm.
You'll eventually get the message that you blew the heap.


> Furthermore when you have exception objects then you widen the debate
> about using exceptions for normal flow of control.

No debate. Exceptions for "normal" flow are heresy.
TIMTOWTDI: Saint Larry won't prevent someone from using
      die new NextState($next);
to return to the main event loop, but you and I can declare a truce
long enough to join forces to tar and feather them, unless they were
using it to implement BrainF*ck or something in the Acme:: namespace.

> My attitude is
> that if you use exceptions for normal flow of control, then what do
> you do in truly exceptional circumstances?

Well, if someone were so silly, they'd use
     exit()
although we have traps for that too ...

> So I'm not a fan of
> encouraging the widespread use of exceptions for normal flow of
> control.

Agreed. Exceptions are named correctly.

If you want non-local normal flow of control, we've got co-routines,
continuations, and closures for that.

[%<snip]

> This I don't care much about because my attitude is, "pick an API and
> stick to it."  However for the return code camp, I have to ask how
> many applications they have seen that will correctly handle, say,
> EAGAIN.

:-)


> Um, you misunderstood what I meant by "robust".
> I don't  mean by robust a program which always does
> the right thing no matter what.  I
> mean a program that does its darndest to keep on
>  working no matter  what.

Oh yes, I understood. My mention of the air force should have tipped
you off ... crash the onboard realtime control-law unit and you crash
the airframe. There's a reason my clients back in the Cold War paid me to
learn about automating code proof of correctness, and it wasn't just
Orange Book A1 Bell & Lapadula model proofs, dry and dull and useless
as those are.

> Most of the programs in the Microsoft Office suite are
> fairly robust.

Yeah, they haven't locked up or cored often lately.

IE on the other hand, has to get shut down with force frequently.

> Firefox is an example of one that isn't - it dumps core pretty
> easily.

For this class of program, I'd rather it core than hang like IE.
Funny, I haven't had FireFox core on me. But then, I'm running FireFox
on a real OS and IE on Windows.

> Whether this characteristic is good or bad is arguable,

I'd say it's a fundamental question of requirements, to be decided
early on, what the preferred failure modes are! What's right for one
app is wrong for another. GUI, realtime, server ... if it's a daemon
and it cores, init.d can spawn another ... and we can read the last
core in our copious spare time. If it's a PIC controlling an acrobatic
robot, a self-administered hard reset could be disastrous, or might be
an acceptable alternative to locking up, if the analog electronics
around the PIC will keep it moving long enough for the PIC to reboot
and figure out what to do next.

Failing gracefully, allowing you a final save before disaster, is a
good thing if it doesn't damage the data worse than coring would have.
Going gray for 5 minutes while it sorts out what, if anything, it
wants to display is not entirely robust according to the Windows
event-loop rules, but hey, it's what Windows users are used to.

> if robustness is a design goal,
> then exceptions make your life a lot
> harder.

Well, that depends on the style of robustness and the style of
solution. Robustness is sometimes implemented by "detect and retry",
which can be implemented by exceptions.
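
A sketch of detect-and-retry via eval (flaky_fetch and the retry
bound are made up): the hard upper bound on attempts makes the
"cannot retry forever" argument trivial.

```perl
use strict;
use warnings;

my $attempts = 0;
sub flaky_fetch {
    $attempts++;
    die "transient failure\n" if $attempts < 3;   # fails twice, then works
    return "payload";
}

my ($value, $tries) = (undef, 0);
while ($tries++ < 5) {          # bounded: termination is obvious
    $value = eval { flaky_fetch() };    # detect: eval traps the die
    last if defined $value;             # retry only while undefined
}
```

The exception is doing the *detection*; the robustness comes from the
bounded loop around it.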

>  (Raymond Chen's writings on this issue assume that robustness
> is a design goal.)

It's a good goal, and one that doesn't seem to be considered often
enough, so it's good for him to write about it, but it's not the only
possible goal (TIMTOWTDI) ... and some possible goals are incompatible
with robustness in A PARTICULAR program. Sometimes the system's
robustness comes from a nanny process or from N+3 redundancy, not from
each node trying its hardest to cling to life when totally confused.

> > Exceptions implemented as old-style trap-this-if-found-call-that are
> > the dual of the dreaded GOTO, they're basically  a COME FROM, with all
> > the problems of stack semantics damage and the problems of action at a
> > distance in the code.

> Yup.

So we agree that those kinds of exceptions should be reserved for
"last one out, please kill the lights" kinds of traps, to release the
resources that exit() doesn't release.

> > Try-Catch semantics control the stack semantics and localize the effects.

> Nope.

It does localize compared to trap global COME FROM.

> The code at points in the stack between where the exception is thrown
> and caught may have to worry about at what point control may terminate

No, if you don't catch it, you're just unwinding. Your destructors get
called, same as on program termination. Bye !

> as I've been pointing out, particularly important for robust
> applications that are trying to make sure that objects are either not
> or else fully instantiated.

Yes, that a destructor may see incompletely-initialized objects is a
case where the mere existence of exceptions puts a tax on all
destructors.

Robust applications don't have null pointers and use them without
testing, yes; and don't follow pointers to freespace either.
Pointerless languages have a lot less problem with that than C++,
where a lot of the Exception flamewar was fought. (Java has its own
issues, don't get me started.)

Nice constructors/destructors/classes don't do that.


This is not the best reason, but it is possibly the most compelling
one, for not doing more work than necessary in constructors and
destructors, deferring real work to an initializer. (Likewise, scripts
and modules are better off using INIT{} and CHECK{} than BEGIN{}, if
they can require a later perl rev.)

A good design pattern is to defer anything that can THROW into a
post-construction explicit initialization call that can return a real
return code. (This is not news. SIGOOPS discussed this 20 years ago,
back when the war was between C++, Objective-C, SmallTalk, and
whatever CLOS was called that week. But it gets forgotten every 5
years, it seems.)
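
The pattern in miniature (class and field names are mine): the
constructor only blesses and cannot throw; everything fallible lives
in init(), which returns a real return code; and the object carries
its own readiness state.

```perl
use strict;
use warnings;

package Widget;

sub new {                      # trivial: allocates, never throws
    my ($class) = @_;
    return bless { ready => 0 }, $class;
}

sub init {                     # all fallible work deferred to here
    my ($self, %args) = @_;
    return 0 unless defined $args{source};   # return code, not a throw
    $self->{source} = $args{source};
    $self->{ready}  = 1;
    return 1;
}

sub is_ready { $_[0]{ready} }  # the object introspects its own readiness

package main;
my $w = Widget->new;           # always safe, never half-initialized
```

No half-initialized undead objects: new() can't fail, and is_ready()
answers the question the system would otherwise have to guess at.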


> Yes, you can do this.  In fact I do something like this in mod_perl.
> However "evaluate what's working and what isn't" is far easier said
> than done.

Oh yes. But "autonomics" is where the industry is heading.

It may mean sending the error out via SNMP to HPOV to be correlated
with the other subsystem errors, flush the request, and take the next
request from the queue.

It may mean calling sbrk to grab more heap and trying again, setting
the depth of recursion to 2N, and the NICE to 2N also.

And if it doesn't mean anything then THIS program doesn't have this option.

> The problem being that unless the top control loop is
> virtually omniscient

I do not intend to create HAL ...

> about the internal state of everything in your
> program, it can't know what things are, say, halfway initialized but
> not really working.

NOTHING SHOULD EVER BE HALF INITIALIZED.
EVER. NOTHING.

An object capable of being half initialized is broken.

Von Neumann said anyone using deterministic pseudo-random numbers was
in a state of sin. I feel similarly about "half-initialized objects".
Just say no.

Recode it so it has a useful sane state of "prepped but not ready" or
"ready for reuse" or something. The Object should introspect its own
unreadiness; no need for the system to know about it.

Now that we've dispensed with undead objects ... it is the duty of the
failing component to throw an interesting Exception object, and of the
catcher to make some sense of it.

> And if you choose to make it omniscient,

No no a thousand times no

> That's not a problem in mod_perl where it suffices to log a message,
> return an internal server error, then go to serve the next page
> request.

Exactly the sort of system that can benefit.

> But it is in an interactive GUI application where you now
> might have, say, a halfway present modal window that blocks all
> further interactions.

(1) Modal windows are evil.
(2) Building modal windows in constructors that throw exceptions isn't
evil, it's just stupid.
Yeah, kind of hard to do that robustly.
That's not the exception's fault.

If it hurts when you do that, don't do that.

If it hurts YOU when some other programmer does that, smack him. Hard.

> As a general principle there are two reasonable ways that I know of to
> handle errors.  The first is to deal with them as close to the error
> as possible when you have as much important context as possible.

When possible, this works great.

> The
> second is to do something very generic.

When satisfactory, and when #1 above isn't, great.

> Neither approach is clearly right.
> Instead they are good for different situations.

Right, neither *always* fits. TIMTOWTDI is required.

And sometimes neither fits.
Then you need to redesign your solution.
Sometimes adding fancy OO exceptions  and try-catch will solve it.
Sometimes making modal windows non-modal and splitting constructors
into constructors and initializers, although a pain for existing code
and bloating the API, will solve it.
Sometimes you need to can the whole ball of spaghetti and think
ravioli or lasagna ... or baked alaska.

Cheers,

-- 
Bill
[EMAIL PROTECTED] [EMAIL PROTECTED]
 
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm
