On Tue, Jul 30, 2013 at 5:22 PM, Casey Ransberger
<[email protected]>wrote:

> Thought I had: when a program hits an unhandled exception, we crash, often
> there's a hook to log the crash somewhere.
>
> I was thinking: if a system happens to be running an optimized version of
> some algorithm, and hit a crash bug, what if it could fall back to the
> suboptimal but conceptually simpler "Occam's explanation?"
>
> All other things being equal, the simple implementation is usually more
> stable than the faster/less-RAM solution.
>
> Is anyone aware of research in this direction?
>

If you are referring to the internal dynamics of an automated optimizer,
such as that of a compiler or a VM, I would consider such an approach quite
reasonable, and I would even expect some modern VMs to do so already.

However, from a general engineering POV, I doubt this has any noticeable
impact on the defect rate of software.

IME, most defects are in the (ideally high-level) user code and not in the
executable code produced by a compiler and/or VM. In C++ programming, for
example, you typically turn off all optimizations during development, so
optimization bugs are fairly easy to spot when they appear, yet I've seen
very, very few such cases over the years.

OTOH, if you are referring to such an approach at the user level, then I
doubt it is sufficiently practical, because I don't think the defect rate
correlates strongly enough with the optimization level.

It is, however, (IMO), inversely proportional to the "abstraction level":
i.e. the lower-level the code, the higher the defect rate. But in this case,
as Smalltalk has always tried to show, higher-level source code does not
necessarily mean less optimized. In fact, even in a non-VM language, it is
possible (although not simple) to use very high-level expressions without
sacrificing performance at all, or at least not significantly (this is
especially true of C++, but it holds for other languages as well).

I consider that the proper engineering approach would not be to have a
low-level implementation as the fast path and a high-level counterpart as a
fallback. Instead, source code should simply be as high-level as possible,
using an abstraction procedure properly engineered to avoid performance
penalties (I personally argue that the so-called abstraction penalty is,
more often than not, a side effect of using the wrong abstraction
procedure).

Having said all that, the general idea of a slower-but-safe fallback is
currently in use in at least one area:

Within a certain application domain (geometric computing), we use the
following technique: computations run in fast floating point, and when the
rounding error is detected to alter the expected behavior, the computation
is restarted using a software-based numeric type with whatever properties
are required to guarantee correct results (such as arbitrary precision, or
algebraic reduction capabilities, etc.).
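For illustration, here is a minimal sketch of that float-then-exact pattern
in Python (not our actual code): a 2-D orientation predicate evaluated in
fast floating point with a deliberately crude error bound, restarting with
the exact rational type `fractions.Fraction` from the standard library
whenever the float result cannot be trusted. The bound constant is
illustrative, not a rigorously derived one.

```python
from fractions import Fraction

# Orientation of point c relative to the directed line a->b:
# the sign of the determinant (bx-ax)(cy-ay) - (by-ay)(cx-ax).

def orient2d_fast(a, b, c):
    """Fast floating-point evaluation plus a conservative error bound."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    # Crude, illustrative bound on the rounding error of det.
    mag = abs((b[0] - a[0]) * (c[1] - a[1])) + abs((b[1] - a[1]) * (c[0] - a[0]))
    return det, 4e-16 * mag

def orient2d_exact(a, b, c):
    """Exact fallback: doubles convert to Fraction without loss."""
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

def orient2d(a, b, c):
    det, err = orient2d_fast(a, b, c)
    if abs(det) > err:                  # float result provably has the right sign
        return (det > 0) - (det < 0)
    return orient2d_exact(a, b, c)      # restart with the exact numeric type
```

Most calls never leave the fast path; only the (near-)degenerate inputs,
where the float determinant falls inside the error bound, pay for the
exact restart.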

Best





-- 
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
