On 11/5/06, Bill Ricker <[EMAIL PROTECTED]> wrote:
> > I am a firm believer in expressing things in the most straightforward
> > way possible.  Most people find loops straightforward, so I'm happy to
> > use loops unless I have a good reason not to.
>
> Most *people* find loops and all other programming constructs weird and 
> opaque.

Well, if you get down to brass tacks, most people are functionally
illiterate.  At least in Canada, which is the only country I know of
that dares to try measuring such stuff.  (Canada's definition of
"functionally illiterate" is "can't read at a grade 8 level", which is
supposed to be the level that, for instance, newspapers aim for.
Based on personal experience, I'd say that the USA does a worse job of
education than Canada...)

> Most *programmers'* first programming language used if-else and some
> do-loop thingy in the introductory section, so *most* programmers find
> loops simpler than recursion or second-order operators, since those
> weren't introduced until the advanced chapters of the book, if they
> were included at all. Is this why Dijkstra said BASIC was a
> mind-crippling affliction?

There are so many reasons why Dijkstra could have said it that there
is no point in guessing which specific items motivated him.

> A mathematician would usually rather use a reduction to a previously
> solved problem than a counting argument.

Sorry, this is BS.  I am speaking here as an almost-mathematician.  (I
came within about a month of finishing my PhD, then encountered a need
for money...)

Mathematicians are happy to use any technique they can.  But counting
arguments are often preferred because they tend to be more
straightforward, and they tend to be more informative.  By more
informative I mean that a counting argument often gives you something
that can be used to produce more precise results, or follow-up
results.

Reduction to a previously solved problem is used a lot simply because
it is a more powerful technique.  When that allows a more elegant
solution, it can be a big win.  However it does not always.  Counting,
meanwhile, is valued enough that there is a whole branch of
mathematics devoted to little else.  (It is called combinatorics.)

On a side note, I remember being part of an interesting conversation
on why students seem to find it easier to learn induction than
recursion when they're almost identical.  Two big parts of the answer
seem to be that induction is somewhat simpler in form, and the
presentation has a more linear flow.  People seem to have a hangup
when reading code that is going to be executed multiple times.  It is
more natural to say, "Here is 1.  OK, based on 1, here is 2.  Based on
2, here is 3.  And so on."  Also recursion tends to be more
complicated.  For instance virtually no elementary math proofs have
multiple base cases, but this is fairly common in recursive
algorithms.
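
A quick sketch (in Python for brevity, though this is a Perl thread)
of both points above.  Fibonacci is a standard example: the recursive
form needs two base cases, something almost never seen in elementary
induction proofs, while the iterative form reads in the linear "here
is 1; based on 1, here is 2" style.

```python
def fib_recursive(n):
    # Two base cases, n = 0 and n = 1, before the recursive step --
    # already more complicated than a typical induction proof.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Linear flow: each value is built from the previous two, in the
    # order a reader would naturally follow.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10), fib_iterative(10))   # 55 55
```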

It may be helpful to point out the obvious here.  There is a widely
known mathematical notation for expressing iterative counting
expressions.  It is the Greek letter Sigma.  There is no corresponding
widely used mathematical notation for expressing recursion.  (There do
exist notations for it, but they are not nearly as widely used or
understood.)  There are a number of reasons for this, but one of the
major ones is that mathematicians find looping and iteration more
straightforward concepts.
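
To make the contrast concrete, here is the same sum written both ways:
the widely understood iterative Sigma notation, and the less common
recursive form as a recurrence.

```latex
% Iterative notation, universally taught:
S_n = \sum_{i=1}^{n} i^2
% versus the recursive formulation of the same quantity:
S_0 = 0, \qquad S_n = S_{n-1} + n^2 .
```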

Another tangential note.  It is important to distinguish between how
straightforward a set of concepts is and how straightforward it is to
express an idea using those concepts.  For instance goto is
conceptually very straightforward, but ideas expressed with goto tend
to be very obscure.

> The beauty of Perl is that Larry has wrought a language in which you
> can express things according to your simplicity, and those who see an
> inner simplicity in the Lisp-inspired and APL-inspired dialects of
> Perl can also happily use our simplicity.

Agreed.

> > There are actually a lot of very prominent programmers who strongly
> > dislike exceptions.
>
> There are some very prominent programmers who strongly like Java too.
>
> There are many valid reasons to avoid using exceptions in a given program.
> There are also valid reasons to use them.
> TIMTOWDI.

Right.  Depending on what kinds of programming you're doing, they may
or may not be very useful for you.

> Many of the reasons to avoid using exceptions are throw-backs to the
> bad "it's a character string" exceptions of early Perl, Python, and
> C++. Fully wrought exception objects (which, Guido didn't seem to
> realize, could have a stringify operator to work compatibly with
> non-updated code!) address many of the old issues.

Funny, the things that I've seen good people complain about have to do
with unexpected flow of control that programmers have not thought
through.  This holds whether you're using objects or strings.

But that said, I'm not a huge fan of exception objects.  One big
reason is that exceptions are by nature code that is only run when
things go wrong.  Programmers being programmers and human nature being
human nature, this is the part of your codebase that is least likely
to be tested or debugged.  And is therefore the most likely to be
faulty.  The one thing that I don't want to have happen is for things
to go further wrong after they've already gone wrong.
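
A sketch of that failure mode (in Python; the names are made up for
illustration).  The "recovery" path was never exercised in testing, so
when the first error finally occurs, the handler itself blows up and
the error you actually see hides the original problem.

```python
def load_config(path):
    # Stand-in for the original failure.
    raise OSError("config file missing: " + path)

def risky_handler():
    try:
        return load_config("app.conf")
    except OSError as err:
        # Untested recovery code: OSError has no fallback_path
        # attribute, so this line raises AttributeError -- things go
        # further wrong after they've already gone wrong.
        return load_config(err.fallback_path)

try:
    risky_handler()
except AttributeError as second_error:
    # The visible error is the bug in the handler, not the missing
    # file that started it all.
    print(type(second_error).__name__)   # AttributeError
```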

It seems to me that exception objects encourage more complexity around
error handling, not less.  More complexity means more room for
mistakes, which is the opposite of what I want.  And an important
special case is when the exception is thrown because Perl is out of
resources.  In that case doing something complex is not only unwise -
it may be impossible!

Furthermore when you have exception objects then you widen the debate
about using exceptions for normal flow of control.  My attitude is
that if you use exceptions for normal flow of control, then what do
you do in truly exceptional circumstances?  So I'm not a fan of
encouraging the widespread use of exceptions for normal flow of
control.

> Exceptions do make life harder for package authors who want to support
> both callers who like exceptions and callers who don't but prefer to
> attempt to actually check all the function return codes. (Professor
> said to do that, but never did in the examples on the board, so few
> students really get the habit, alas.)

This I don't care much about because my attitude is, "pick an API and
stick to it."  However for the return code camp, I have to ask how
many applications they have seen that will correctly handle, say,
EAGAIN.
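
For the record, here is roughly what correctly handling EAGAIN takes,
sketched in Python with a non-blocking pipe.  Most return-code-checking
programs I've seen would treat the failed read as fatal instead of
doing anything like this.

```python
import errno
import os

def read_with_retry(fd, size, attempts=3):
    # Retry a read on a non-blocking descriptor when no data is ready.
    for _ in range(attempts):
        try:
            return os.read(fd, size)
        except BlockingIOError as err:
            if err.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                raise
            # No data yet.  A real program would select()/poll() here
            # rather than spin; we just retry for the sketch.
    return b""

r, w = os.pipe()
os.set_blocking(r, False)
print(read_with_retry(r, 16))   # b'' -- nothing written, EAGAIN each time
os.write(w, b"hello")
print(read_with_retry(r, 16))   # b'hello'
```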

> > some kinds of programming they have a point.  If your goal is to write
> > a robust program (ie one that will keep on running and doing the best
> > it can even though things are going to pot around and inside it) then
> > exceptions make your life very, very hard.
>
> Depends how you handle exceptions. If things are going to pot,
> exceptions can be used to focus the system on the survival priorities.
> Conversely, a single-threaded system trying to execute a control law
> in a fixed time cycle would have a problem with exceptions (or garbage
> collection), but that's an obsolete criterion - even the air force can
> afford multithreadable processors now.

Um, you misunderstood what I meant by "robust".  By robust I don't
mean a program that always does the right thing no matter what.  I
mean a program that does its darndest to keep on working no matter
what.

Most of the programs in the Microsoft Office suite are fairly robust.
They can run into all sorts of internal problems yet avoid dumping
core.  Firefox is an example of one that isn't - it dumps core pretty
easily.  Whether this characteristic is good or bad is arguable, but
if robustness is a design goal, then exceptions make your life a lot
harder.  (Raymond Chen's writings on this issue assume that robustness
is a design goal.)

> Exceptions implemented as old-style trap-this-if-found-call-that are
> the dual of the dreaded GOTO; they're basically a COME FROM, with all
> the problems of stack semantics damage and the problems of action at
> a distance in the code.

Yup.

> Try-Catch semantics control the stack semantics and localize the effects.

Nope.

The code at every point in the stack between where the exception is
thrown and where it is caught has to worry about control terminating
unexpectedly because an exception passed through uncaught.  This is,
as I've been pointing out, particularly important for robust
applications that are trying to make sure that objects are either
fully instantiated or not instantiated at all.
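
A sketch of the "fully instantiated or not at all" concern (in
Python; the classes are made up for illustration).  If an exception
escapes mid-update, the first ledger is left inconsistent; the safe
version validates before mutating, so a failure leaves it untouched.

```python
class UnsafeLedger:
    def __init__(self):
        self.entries = []
        self.total = 0

    def add(self, amount):
        self.total += amount          # state already changed...
        if amount < 0:
            raise ValueError("negative amount")
        self.entries.append(amount)   # ...but this never runs

class SafeLedger(UnsafeLedger):
    def add(self, amount):
        if amount < 0:                # validate before mutating
            raise ValueError("negative amount")
        self.entries.append(amount)
        self.total += amount

bad = UnsafeLedger()
try:
    bad.add(-5)
except ValueError:
    pass
print(bad.total, bad.entries)    # -5 [] -- total and entries disagree

good = SafeLedger()
try:
    good.add(-5)
except ValueError:
    pass
print(good.total, good.entries)  # 0 [] -- untouched
```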

> > But in other kinds of
> > programming it is perfectly fine to terminate the program with an
> > exception, have the programmer see the error message, and fix it
> > properly.
>
> Exceptions do not require termination. The program can catch
> EVERYthing at top control loop and evaluate what's working and what
> isn't. There's more than one way to skin a camel.

Yes, you can do this.  In fact I do something like this in mod_perl.
However "evaluate what's working and what isn't" is far easier said
than done.  The problem is that unless the top control loop is
virtually omniscient about the internal state of everything in your
program, it can't know what things are, say, halfway initialized but
not really working.  And if you choose to make it omniscient, that's
repeated information that is bound to become a maintenance issue.

That's not a problem in mod_perl, where it suffices to log a message,
return an internal server error, then go on to serve the next page
request.  But it is a problem in an interactive GUI application, where
you might now have, say, a halfway-created modal window that blocks
all further interaction.
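
The mod_perl-style pattern looks roughly like this (a Python sketch;
handle_request is a made-up stand-in).  It works because each request
is independent: the loop can catch everything, log, and move on
without needing to know anything about half-initialized internal
state - which is exactly the limitation described above.

```python
import logging

def handle_request(req):
    # Stand-in for a request handler that sometimes fails.
    if req == "bad":
        raise RuntimeError("handler blew up")
    return f"200 OK: {req}"

def serve(requests):
    results = []
    for req in requests:
        try:
            results.append(handle_request(req))
        except Exception:
            # Generic recovery: log with traceback, answer 500, and
            # go on to serve the next request.
            logging.exception("internal error serving %r", req)
            results.append("500 Internal Server Error")
    return results

print(serve(["a", "bad", "b"]))
```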

As a general principle there are two reasonable ways that I know of to
handle errors.  The first is to deal with them as close to the error
as possible when you have as much important context as possible.  The
second is to do something very generic.  Neither approach is clearly
right.  Instead they are good for different situations.

Cheers,
Ben
 
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm
