On Wed, Jul 31, 2013 at 3:13 PM, Ondřej Bílka <[email protected]> wrote:
> On Wed, Jul 31, 2013 at 02:52:39PM -0300, Fernando Cacciola wrote:
> > On Tue, Jul 30, 2013 at 5:22 PM, Casey Ransberger
> > <[email protected]> wrote:
> > >
> > > Thought I had: when a program hits an unhandled exception, we crash;
> > > often there's a hook to log the crash somewhere.
> > >
> > > I was thinking: if a system happens to be running an optimized version
> > > of some algorithm, and hits a crash bug, what if it could fall back to
> > > the suboptimal but conceptually simpler "Occam's explanation"?
> > >
> > > All other things being equal, the simple implementation is usually more
> > > stable than the faster/less-RAM solution.
> > >
> > > Is anyone aware of research in this direction?
> >
> > If you are referring to the internal dynamics of an automated optimizer,
> > such as that of a compiler or a VM, I would consider such an approach
> > quite reasonable and I would even expect some modern VMs to do so already.
> >
> > However, from a general engineering POV, I doubt this has any noticeable
> > impact on the defect rate of software.
>
> It has, as there is the 'you can solve any problem by backtracking' mantra.
>
> When a problem is algorithmic in nature (and not "update the following
> fields according to business rules"), then you typically check the program
> with a consistency test that runs a conceptually simpler (but slower)
> algorithm.

Fair enough, but selecting between a complex vs. a simple algorithm is not,
IMO, the same as selecting between an optimized vs. a non-optimized
implementation. In fact, issues with the complex algorithm are more likely
to be found in the algorithm itself than in its implementation.

Anyway, if the approach Casey suggested was in fact meant to select between
algorithms as opposed to implementations, then indeed I agree. In fact, what
I mentioned about the technique used in geometric computing might very well
be considered an example of that (with the software emulation of error-free
numeric operations being the simpler but slower algorithm).

> It is possible to generate this scheme automatically for algorithms

What can be done automatically for algorithms, exactly? Can you elaborate?
While I can think of several slower-but-simpler algorithms available for
many existing problems, I can't think of any way to automatically create one
from the other, so the programmer would have to provide the implementations
of both algorithms manually.

> that do not do I/O: you just replace memory access with one that keeps a
> journal, and on error rewind the journal and run the simpler algorithm.

Fully automatic backtracking would be an immensely powerful tool for
algorithms indeed, but I am not sure that "just replace memory access with a
journal" is enough. It's part of the process for sure, but the hard part is
doing it such that the "end user programmer" can still express his algorithm
naturally.

I happen to work with algorithms, and I had to instrument backtracking
manually more times than I want to remember. And I failed to properly
implement it even more often, with the fatal result of the program totally
failing to be robust even with sound theoretical error responses.
Backtracking is difficult :) so I would love to see this.

But let's keep in mind that backtracking and the approach Casey posted are
quite different, unless of course you take his description as part of the
instrumentation of some automated backtracking solution.

> Doing this with hardware transactional memory could decrease overhead.

Indeed.
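Just to make the fallback idea concrete, here is a toy sketch of my own (the
names are made up, it is not code from any real system): for a side-effect-free
computation the "journal" can be as trivial as working on a private copy, so
that on failure you simply discard the copy and rerun the conceptually simpler
implementation.

// Toy sketch: run the optimized implementation first and, if it throws,
// discard its partial results and rerun the simpler one on the original input.
#include <algorithm>
#include <stdexcept>
#include <vector>

// Stand-in for the optimized but more complex implementation
// (here it just simulates a defect on larger inputs).
void fast_sort(std::vector<int>& v)
{
    if (v.size() > 3)
        throw std::runtime_error("bug in the optimized path");
    std::sort(v.begin(), v.end());
}

// Stand-in for the slower but conceptually simpler implementation.
void simple_sort(std::vector<int>& v)
{
    for (std::size_t i = 1; i < v.size(); ++i)
        for (std::size_t j = i; j > 0 && v[j] < v[j - 1]; --j)
            std::swap(v[j], v[j - 1]);
}

std::vector<int> sort_with_fallback(std::vector<int> const& input)
{
    std::vector<int> work = input;    // private copy: the "journal" is trivial here
    try
    {
        fast_sort(work);
    }
    catch (std::exception const&)
    {
        work = input;                 // "rewind": throw away partial results
        simple_sort(work);            // Occam's fallback: slower but simpler
    }
    return work;
}

Of course the interesting (and hard) part is exactly what we were discussing:
doing the journaling and the rewinding automatically when the computation is
not as conveniently self-contained as this one.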
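And for the record, the geometric-computing technique I mentioned has roughly
the following shape. This is only an illustrative sketch of my own, not code
from any real library; the error bound is deliberately crude, and I assume
integer coordinates with |x|, |y| < 2^30 so that the "exact" path is plain
64-bit arithmetic (real libraries use exact or multi-precision number types).

// Toy sketch of a "filtered" geometric predicate: a fast floating-point
// evaluation plus an error bound, with the slow exact evaluation used only
// when the fast result cannot be trusted.
#include <cfloat>
#include <cmath>
#include <cstdint>

// Exact evaluation: slower in general, trivially correct under the
// assumption that |coordinates| < 2^30 (the determinant then fits in 64 bits).
int orientation_exact(std::int32_t ax, std::int32_t ay,
                      std::int32_t bx, std::int32_t by,
                      std::int32_t cx, std::int32_t cy)
{
    std::int64_t det = (std::int64_t(bx) - ax) * (std::int64_t(cy) - ay)
                     - (std::int64_t(by) - ay) * (std::int64_t(cx) - ax);
    return (det > 0) - (det < 0);   // +1 = left turn, -1 = right turn, 0 = collinear
}

// Filtered evaluation: fast path with doubles plus a deliberately loose error bound.
int orientation(std::int32_t ax, std::int32_t ay,
                std::int32_t bx, std::int32_t by,
                std::int32_t cx, std::int32_t cy)
{
    double p = (double(bx) - ax) * (double(cy) - ay);
    double q = (double(by) - ay) * (double(cx) - ax);
    double det = p - q;
    double bound = 8 * DBL_EPSILON * (std::fabs(p) + std::fabs(q));

    if (std::fabs(det) > bound)     // clearly non-degenerate: trust the fast path
        return (det > 0) - (det < 0);

    return orientation_exact(ax, ay, bx, by, cx, cy);   // too close to call: go exact
}

The exact evaluation plays the role of the simple-but-slow fallback, except
that here the switch is driven by an error bound rather than by a crash.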
> > IME, most defects are in the (ideally high-level) user code and not in
> > the effective code produced by a compiler and/or VM. In the case of C++
> > programming, for example, you typically turn off all optimizations during
> > development so it's fairly easy to spot optimization bugs, yet I've seen
> > very, very few cases over the years.
>
> Then you work on quite small programs. A C++ program easily gets 20 times
> slower due to the lack of inlining between -O0 and -O1.

That's right, but I think you misunderstood me. I never said that the debug
and release configurations were equally performant. I said that since we can
run the debug build, we can detect an optimizer bug when we switch to release
far more easily than in other languages where the optimizer is always there.

Or maybe you meant to say that we just can't use the debug build at all due
to the giant slowdown? That's right of course, and I totally hate it when I
have (or mostly, had) no choice. But whatever the size of the program, one
always can (or should be able to) run just parts of it, and these days that's
quite often the case, for example during (unit) testing.

Best

--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com
