> It seems that you could work on all the tables simultaneously, in a
> time-sliced sort of way, and bound your worst case to N times the
> ideal case.  If you do the same amount of work per time unit on each
> pair of tables, then you'll finish up the easiest pair first, without
> having to know which one it will be ahead of time.  From there the
> rest of the work becomes easier.

Over at IDSIA, they're working on something
like this for AI: the general idea is that
if one is looking for a good "predictor",
and is confident that the problem one has
is better described by code than data, it
should be possible to try interleaving all
possible predictors, to find a "short" one
with some acceptable overhead.
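
A minimal sketch of the time-sliced interleaving,
assuming a toy setup where each candidate is a
generator that yields None until it finds its
answer (the names and interface here are my own
illustration, not IDSIA's actual machinery):

```python
def interleave(tasks):
    """Run tasks round-robin, one step each per pass.

    If the easiest task needs k steps in isolation, the total
    work done before it finishes is at most len(tasks) * k --
    the "N times ideal" bound, without knowing in advance
    which task is easiest.
    """
    iters = dict(tasks)
    steps = 0
    while iters:
        for name in list(iters):
            steps += 1
            try:
                result = next(iters[name])
            except StopIteration:
                del iters[name]  # candidate gave up; drop it
                continue
            if result is not None:
                return name, result, steps
    return None

# Toy "predictors": yield None n times, then the answer.
def countdown(n, answer):
    for _ in range(n):
        yield None
    yield answer

tasks = {"hard": countdown(100, "h"),
         "medium": countdown(10, "m"),
         "easy": countdown(3, "e")}
name, result, steps = interleave(tasks)
# "easy" finishes first; total steps stay within 3x its solo cost
```

The same round-robin discipline is what bounds the
table-pair search above: constant work per candidate
per time unit, stop at the first finisher.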

-Dave
