When we speak of separating meaning from optimization, I get the impression
we want to automate the optimization. In that case, we should validate the
optimizer(s). But you seem to be assuming hand-optimized code with a
(simplified) reference implementation. That's a pretty good pattern for
validation and debugging, and I've seen it used several times (most
recently in a library called 'reactive-banana', and most systematically in
Goguen's BOBJ). But I've never seen it used as a fallback. (I have heard of
cases of systematically running multiple implementations and comparing them
for agreement, e.g. at Airbus.)
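
The pattern above — a simple reference implementation kept beside a
hand-optimized one, used to validate the optimized version and (per Casey's
question) as a fallback — can be sketched in a few lines. This is a minimal
illustration in Python; all of the names (`reference_sort`, `optimized_sort`,
`with_fallback`, `agree`) are hypothetical, not taken from reactive-banana,
BOBJ, or any system mentioned here:

```python
import random

def reference_sort(xs):
    # Obviously-correct reference: selection sort, O(n^2) but easy to audit.
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

def optimized_sort(xs):
    # Stand-in for the hand-optimized implementation.
    return sorted(xs)

def with_fallback(optimized, reference):
    """Casey's idea: run the optimized version, but fall back to the
    simple reference implementation if the optimized one fails."""
    def run(xs):
        try:
            return optimized(xs)
        except Exception:
            return reference(xs)
    return run

def agree(optimized, reference, trials=100):
    """Differential validation: run both implementations on random
    inputs and check that they agree."""
    for _ in range(trials):
        xs = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
        if optimized(xs) != reference(xs):
            return False
    return True
```

The comparison-for-agreement variant is `agree`; the fallback variant is
`with_fallback`. The caveat from later in this thread applies: if the
optimized path fails fleet-wide, everyone falls back to the slow path at
once.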

On Tue, Jul 30, 2013 at 6:08 PM, Casey Ransberger
<[email protected]> wrote:

> Hi David, comments below.
>
> On Jul 30, 2013, at 5:46 PM, David Barbour <[email protected]> wrote:
>
> > I'm confused about what you're asking. If you apply an optimizer to an
> > algorithm, it absolutely shouldn't affect the output. When we debug or
> > report errors, it should always be in reference to the original source code.
> >
> > Or do you mean some other form of 'optimized'? I might rephrase your
> > question in terms of 'levels of service' and graceful degradation (e.g.
> > gracefully switching from video conferencing to teleconferencing if it
> > turns out the video uses too much bandwidth); there's a lot of research
> > in that area. One course I took - survivable networks and systems -
> > heavily explored that subject, along with resilience. Resilience involves
> > quickly recovering to a better level of service once the cause of the
> > fault is removed (e.g. restoring the video once the bandwidth is available).
> >
> > Achieving the ability to "fall back" gracefully can be a challenge. Things
> > can go wrong many more ways than they can go right. Things can break in
> > many more ways than they can be whole. A major issue is 'partial failure' -
> > because partial failure means partial success. Often some state has been
> > changed before the failure occurs. It can be difficult to undo those
> > changes.
>
> So I'm talking about the STEPS goal around separating meaning from
> optimization. The reason we don't have to count optimizations as part of
> our "complexity count" is that the system can work flawlessly without
> them; it might just need a lot of RAM and a crazy fast CPU to do it.
>
> The thought I had was just this: complex algorithms tend to have bugs more
> often than simple ones. So why not fail over, if you have an "architecture"
> that always keeps the simplest solution to the problem right *beside* the
> optimized solution that makes it feasible on current hardware?
>
> Of course you're right, I can imagine there being lots of practical
> problems with doing things this way, especially if you've got, say, a
> Google-sized fleet of machines all switching to bubble sort at the same
> time! But it's still an interesting question to my mind, because one of the
> most fundamental questions I can ask is "how do I make these systems less
> brittle?"
>
> Does this clarify what I was asking about?
>
> Case
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
>
