Darren New wrote:
Christopher Smith wrote:
Um, well, sure you do. If I get a division by zero error, I'm pretty
sure my introspection is still going to work.
Not at all. You have two choices with a division-by-zero error: either
you have a logic error in your code (a.k.a. a bug, in which case
dumping core is a very good way to help you understand the problem),
or somehow memory has become corrupted (in which case you have an
unexpected error). In the latter case you can't trust that
introspection works.
Right. Note that I'm talking about SAFE LANGUAGES. :-) The last
sentence there is exactly why I dislike unsafe languages.
Yes, the raw power of being able to say "well, the language says it
shouldn't" tends to be helpful when you are busily erasing millions of
dollars' worth of transactions, or are perhaps a navy ship at sea
without navigation. ;-)
And no, dumping core is entirely inappropriate in many circumstances.
It's the cause of all kinds of messiness in a lot of system services
that wind up having to fork off children just in case some piece of
code dumps core.
First, if we're using an imaginary piece of perfect hardware with a
perfect OS, perfect drivers, and a perfect runtime, then we can assume
that we've made a logic error here, and you very much want to dump
core and stop doing anything so that the error can be corrected or
avoided. Suppose for a second that you did catch a divide-by-zero
error on this imaginary system: how would you suggest proceeding at
that point? Would you somehow know, despite the unexpected nature of
the problem, which magical variable needs to be changed to some value
other than zero? Would you write code that interprets the stack trace,
finds the variable that shouldn't be zero, and replaces it with some
other "of course correct" value? This seems unlikely, at least for an
*unexpected* error. Given that you didn't anticipate the problem, how
do you know that your fix isn't going to do more harm than good?
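The contrast both sides are circling can be sketched in Python (used here purely as a stand-in for any safe language): the error is fully defined and introspection works, but nothing available to the handler says what the *correct* value would have been.

```python
import traceback

def scale(total, count):
    # count == 0 is the bug we "didn't anticipate"
    return total / count

try:
    scale(100, 0)
except ZeroDivisionError:
    # Introspection works: a safe language gives a precise, defined
    # error with a usable stack trace...
    tb = traceback.format_exc()
    # ...but the handler still has no idea what value `count` should
    # have held, which is the crux of the "dump core and fix the bug"
    # argument above.
```

The handler can log and re-raise, but inventing an "of course correct" replacement value is exactly the step no amount of introspection supplies.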
Secondly, I don't find that having to fork off children causes all
kinds of messiness. Indeed, it tends to be a lot less messy than the
alternative. Beyond the important factor that I've highlighted, what
is the huge difference between unwinding a stack until some
generalized catch block is found that somehow tries to deal with the
problem, versus a process dying and some parent process with a
generalized SIGCHLD handler somehow trying to deal with the problem?
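The process-isolation pattern under discussion can be sketched as follows (a POSIX-only Python sketch standing in for the C system services being talked about; SIGSEGV here is a stand-in for "some piece of code dumps core"):

```python
import os
import signal

pid = os.fork()
if pid == 0:
    # Child (the expendable worker): simulate the crash by
    # delivering SIGSEGV to ourselves.
    os.kill(os.getpid(), signal.SIGSEGV)
    os._exit(0)  # never reached
else:
    # Parent: the generalized "SIGCHLD handler somehow tries to deal
    # with the problem" -- here we simply reap the child and inspect
    # how it died, then could restart it, log, etc.
    _, status = os.waitpid(pid, 0)
    crashed = os.WIFSIGNALED(status)
    sig = os.WTERMSIG(status) if crashed else None
```

The parent learns of the death through the exit status, exactly parallel to a catch block learning of a failure through an unwound exception, which is the symmetry being pointed out above.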
Now, of course, there can be bugs in the implementation of any
language, or hardware failures you can't detect. But we're talking
about languages here.
I guess I could see how this would be a meaningful point if my
programs weren't stuck dealing with reality.
It's like security. It's a matter of degree. Using a safe language to
start with eliminates whole swaths of problems. It doesn't eliminate
them all.
C++ makes it entirely possible to write code that never does anything
unspecified. If you want, you could add a little lint check to your
build process that verifies you always use smart pointers, bounds
checking, etc. I guess the difference is that a C++ developer would
still recognize that the platform they're running on isn't perfect,
and so when they see an unexpected error, the best strategy is to get
out of Dodge.
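A build-time check of the sort described might look like this toy sketch (the regex, the allow-list, and the sample source are all made up for illustration; a real project would reach for clang-tidy or a similar tool rather than a grep):

```python
import re

# Crude approximation of "verify that you always use smart pointers":
# flag any line calling raw `new` that isn't going through the usual
# smart-pointer factory functions.
RAW_NEW = re.compile(r"\bnew\b")
ALLOWED = ("make_unique", "make_shared")

def raw_new_lines(cpp_source):
    """Return 1-based line numbers that appear to use raw `new`."""
    return [n for n, line in enumerate(cpp_source.splitlines(), 1)
            if RAW_NEW.search(line)
            and not any(ok in line for ok in ALLOWED)]

sample = """\
auto p = std::make_unique<Foo>();
Foo* q = new Foo();        // should be flagged
"""
```

Wiring something like this into the build (failing the build on any hit) is the "little lint check" idea: it doesn't make C++ a safe language, but it mechanically narrows the unsafe surface you have to audit.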
No matter how much I chant that Java is a "safe" language, it proves
to be painfully difficult to work around leaks in java.util.Vector,
memory corruption in the AWT, little buffer overruns in a native JDBC
driver, or my personal favourite: JVMs that crash when they encounter
byte codes that they determine can never be reached.
And how is this worse than similar problems in an unsafe language? How
do you work around buffer overruns in any other language's system
libraries?
Note that buffer overruns in native JDBC libraries are caused by using
an unsafe language, ya know. Probably the same is true of the
problems in java.util.Vector and in AWT. Had all these things actually
been written in Java, you'd not be seeing those problems. The last one
is an implementation fault (and probably an intentional one for
speed-over-correctness reasons), which you should fix at the compiler
level.
Hehe. Yeah, I remember reading that Java had no undefined behavior
when I first learned the language.
"It is up to each collection to determine its own synchronization
policy. In the absence of a stronger guarantee by the implementation,
undefined behavior may result from the invocation of any method on a
collection that is being mutated by another thread; this includes
direct invocations, passing the collection to a method that might
perform invocations, and using an existing iterator to examine the
collection."
Yes? So? That's not saying the language has undefined behavior. It's
saying you don't know how the implementation is written.
No, that would be *unspecified* behavior, which is quite different. They
used the term *undefined*, and it really is *undefined*.
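For contrast with the collections passage quoted above, here is what a *defined* failure for the same situation looks like (a Python sketch, used only as an illustration; it is not claiming anything about the Java libraries under discussion): CPython detects mutation during iteration and raises a specific, documented exception rather than leaving the outcome open.

```python
d = {"a": 1, "b": 2}
try:
    for k in d:
        d["c"] = 3   # mutate the dict while iterating over it
except RuntimeError as e:
    # The interpreter notices the mutation and raises a well-defined
    # error, instead of letting "undefined behavior result".
    caught = str(e)
```

Whether a language pins this down as a guaranteed exception, calls it unspecified, or calls it undefined is exactly the distinction the thread is arguing over.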
"When width is less than zero, Rectangle behavior is undefined.
Again, this isn't undefined behavior in Java. This is "we're not
restricting how people implement Rectangle" undefined behavior. I.e.,
this isn't "Rectangle may branch into the middle of your video memory"
undefined behavior. The insides of Rectangle still have to follow the
rules of the Java language (assuming Rectangle is implemented in Java).
...and that's an assumption that you can't make and exactly why it is
"undefined" rather than "unspecified".
Sadly, few languages actually define the difference; Ada is one of the few that does.
The difference between unspecified and undefined is pretty well
understood by language designers.
--Chris
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg