On Mar 29, 2009, at 10:24 AM, Michele Simionato wrote:

> Uhm ... But what's the rationale for deferring the error so
> late?

It is consistent with what you'd expect from a dynamically
typed language with eager, call-by-value evaluation.

Languages with static type systems (ML, Haskell, etc.)
define a set of rules (e.g., a Hindley-Milner type system)
through which they distinguish valid programs that the
compiler accepts and invalid programs that the compiler
rejects.  Scheme has no such system, and thus has no
rules to distinguish valid and invalid programs.  (BTW,
I don't think any standard type system does arithmetic
analysis that allows it to reject a division-by-zero
error such as the one under discussion here.)

Having an eager, call-by-value evaluation semantics means
that the implementation does not have total freedom in
reordering the evaluation of expressions so as to lift
the division-by-zero error to the top of the program.
For example, the compiler is not allowed to rewrite
  (lambda () (/ 1 0))
into
  (let ([tmp (/ 1 0)]) (lambda () tmp))
Doing so would be confusing.  Why?  Because even
a freshman student is taught that lambda only creates
a procedure and does not evaluate its body.  Getting
a division-by-zero error in that case means that the
body was evaluated and there goes the axiom.
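
For example, under the standard call-by-value semantics
(the name f here is just illustrative):

  (define f (lambda () (/ 1 0)))  ; fine: only creates a procedure
  (f)                             ; division-by-zero error, at call time

The error belongs to the call, not to the creation of the
closure.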

> Should the standard allow smart compilers, able to
> evaluate at compile time (some of) the definitions
> to reject invalid programs or not?

The standard only binds implementations in R6RS mode.
So, an implementation can (in principle) provide a
mode in which this happens.  The question is whether
this is a good idea.

Contrived example:  what would a smart compiler do
when presented with (procedure? (lambda () (/ 1 0)))?

On the one hand, the smart compiler might reduce that
whole expression to #t for the obvious reason.  On the
other hand, it might reject the whole program since it
contains a division-by-zero error.  What should it do?
Without a [programmer-]predictable system for deciding
what to accept and what to reject, such a check quickly
becomes a hindrance rather than a help.
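
Under the standard eager semantics, the answer is
unambiguous: lambda does not evaluate its body, so

  (procedure? (lambda () (/ 1 0)))  ; => #t

Only a "smart" (non-standard) compiler would even
consider rejecting it.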

> In the case of the zero-division error we are discussing
> there is no way this can ever be a valid program, so
> why make acceptable something that will never run?

In addition to the example above, the programmer might
want to write a "check-error" procedure that takes a
thunk, applies it from within an exception handler,
and asserts that it does, indeed, raise an error.
One good test for that procedure is to make sure that
it works with (lambda () (/ 1 0)).  If the system
rejected that program, you'd be forced to resort to
obfuscation to fool the system into accepting the
input program.
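
A minimal sketch of such a check-error procedure, using
R6RS guard (the name and exact interface are illustrative):

  (define (check-error thunk)
    (guard (e [else #t])  ; any raised condition: the check passes
      (thunk)
      #f))                ; nothing raised: the check fails

  (check-error (lambda () (/ 1 0)))  ; => #t
  (check-error (lambda () (+ 1 1)))  ; => #f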

> To be clear: I want to understand if the current
> behavior is the way it is just for performance
> reasons, for simplicity reasons (we don't want
> to require too smart compilers), or if there is
> something else I am missing.

I don't think it's the simplicity of the implementation
that's the big factor here.  It's the simplicity of the
evaluation model that would be affected if the compiler
became too smart (i.e., too smart for its own good).
You probably want a compiler that has consistent and
predictable behavior when presented with "incorrect"
programs (after you define the meaning of "incorrect"
programs :-)).

Aziz,,,
