Mike,

You are talking about two different occurrences of a computational explosion here, so we need to distinguish them.

One is a computational explosion that occurs at design time: this is when a researcher gets an algorithm to do something on a "toy" problem, but then they work out how the algorithm's cost grows when it is scaled up to a full-size problem and discover that it will simply need too much computing power. This explosion doesn't happen in the AGI, it happens in the calculations done by the AGI designer.
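To make the design-time version concrete, here is a minimal sketch (my own illustration, with a made-up brute-force example, not anything from a specific AGI design): the designer estimates the operation count on paper, and the explosion shows up in that arithmetic long before any code is run at full scale.

```python
# Hypothetical design-time scaling analysis: a brute-force search over all
# 2**n states of an n-variable problem. The numbers below are the
# designer's pencil-and-paper estimate, not a running system.

def brute_force_ops(n):
    """Operations needed to enumerate all 2**n states of an n-variable problem."""
    return 2 ** n

toy = brute_force_ops(20)    # toy problem: ~1e6 states, easily tractable
full = brute_force_ops(100)  # full-size problem: ~1.3e30 states

print(f"toy problem:  {toy:.2e} operations")
print(f"full problem: {full:.2e} operations")
# Even at 1e12 operations per second, the full-size problem would take
# on the order of 4e10 years: the explosion lives in the designer's
# calculation, not in any actual machine.
```

The algorithm itself is perfectly well-defined at both sizes; it is the scaling estimate that kills it.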

The second type of explosion might occur in an actual working system (although strictly speaking this would not be called a "computational explosion" so much as a "screw up"). If some AGI designer inserts an algorithm that, say, requires the system to engage in an (almost) infinitely long calculation to make a decision at some point, and if the programmer allows the system to start this calculation and then wait for it to end, then the system will hang.

AI and Cog Sci have not been "obsessed" with computational explosions: it is just a fact that any model that suffers from one is dumb, and there are many that do.

They have no special connection to "rational" algorithms; an explosion can happen in any kind of system. (It happens in Microsoft Windows all the time, and if that's rational I'll eat the entire town of Redmond, WA.)

It is certainly true that some styles of computation are more prone to hanging than others. But it is really a pretty straightforward matter to write algorithms in such a way that this is not a problem: it may slow some algorithms down a bit, but that is not a fundamental issue.
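The standard guard is simple enough to sketch. This is my own toy illustration (not the actual mechanism in anyone's system): give a potentially unbounded computation a time budget, and have it return the best answer found so far when the budget runs out, so the system can never hang waiting for the calculation to finish.

```python
import itertools
import time

def bounded_search(candidates, score, budget_seconds=0.1):
    """Evaluate candidates until the time budget expires; never hangs."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() > deadline:
            break  # budget spent: stop what we're doing and go for a drink
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# Even over an effectively infinite candidate stream, the call returns.
result = bounded_search(itertools.count(),
                        lambda x: -abs(x - 7),
                        budget_seconds=0.05)
```

The per-iteration deadline check is the "slows some algorithms down a bit" cost mentioned above; it is overhead, not a fundamental limitation.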

For what it's worth, my system does indeed stay well away from situations in which it might get locked up. It is always happy to stop what it's doing and go for a drink.

But remember, all this is about "hanging" or "livelock", not about the design problem.


Richard Loosemore





Mike Tintner wrote:
Thanks. But one way and another, although there are different variations, cog sci and AI have been obsessed with computational explosions? Ultimately, it seems to me, these are all the problems of algorithms - of a rigid, rational approach and system - which inevitably get stuck in dealing with real world situations, that don't fit or are too computationally demanding for their models. (And can you *guarantee* that your particular "complex" approach isn't going to run into its own explosions?)

These explosions never occur, surely, in the human brain. For at least two reasons.

Crucially, the brain has a self which can stop any computation or train of thought and say: bugger this - what's the point? - I'm off for a drink. An essential function. In all seriousness.

Secondly, the brain doesn't follow closed algorithms, anyway, as we were discussing. And it doesn't have a single model but rather always has conflicting ones. (I can't remember whether it was John or s.o. else recently who said "I've learned that I can live with conflicting models/worldviews").


Richard: Mike Tintner wrote:
Essentially, Richard & others are replaying the same old problems of computational explosions - see "computational complexity" in this history of cog. sci. review - no?

No:  this is a misunderstanding of "complexity" unfortunately (cf the
footnote on p1 of my AGIRI paper):  computational complexity refers to
how computations scale up, which is not at all the same as the
"complexity" issue, which is about whether or not a particular system
can be explained.

To see the difference, imagine an algorithm that was good enough to be
intelligent, but scaling it up to the size necessary for human-level
intelligence would require a computer the size of a galaxy.  Nothing
wrong with the algorithm, and maybe with a quantum computer it would
actually work.  This algorithm would be suffering from a computational
complexity problem.

By contrast, there might be proposed algorithms for implementing a
human-level intelligence which will never work, no matter how much they
are scaled up (indeed, they may actually deteriorate as they are scaled
up).  If this was happening because the designers were not appreciating
that they needed to make subtle and completely non-obvious changes in
the algorithm, to get its high-level behavior to be what they wanted it
to be, and if this were because intelligence requires
complexity-generating processes inside the system, then this would be a
complex systems problem.

Two completely different issues.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=74586829-bb45d1