I want to strongly agree with Richard on several points here, and
expand on them a bit in light of later discussion.

On 10/20/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> It used to be a standing joke in AI that researchers would claim there
> was nothing wrong with their basic approach, they just needed more
> computing power to make it work.  That was two decades ago:  has this
> lesson been forgotten already?

This was very true then, and it continues to be true now. For those who
offer insufficient computing power as the explanation, I would ask:
which approaches do you expect to become viable with more computing
power? How do they scale? Why would they work better with more
computation?

Relatedly, very few AI research programmes operate in strict real
time. Many use batch processes, virtual worlds, or automated
interaction scripts. It would be trivial to modify these systems to
behave as if they had ten times as much computational power, or a
thousand times as much. Even if it took 1,000,000 seconds (about 11 1/2
days) for every second of intelligent behavior with currently available
computing power, the results would be worth it, and unmistakable, if
true.
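
To make that concrete, here is a minimal sketch of the kind of time
dilation I mean, assuming a generic simulated world and agent. The
World/Agent interfaces and the 1000x figure are purely illustrative,
not taken from any particular system:

import time

SPEEDUP = 1000    # pretend the agent's hardware is 1000x faster than it really is
SIM_STEP = 0.1    # simulated seconds advanced per world tick

class World:
    """Stand-in virtual world; a real project would have its own interface."""
    def observe(self):
        return {}
    def apply(self, action, dt):
        pass

class Agent:
    """Stand-in agent that may think right up to its (dilated) deadline."""
    def think(self, observation, deadline):
        while time.monotonic() < deadline:
            pass    # ...deliberation would happen here...
        return None

def run_dilated(world, agent, sim_seconds):
    """Advance the world by sim_seconds of simulated time, giving the agent
    SPEEDUP times the per-step compute budget that strict real time allows."""
    t = 0.0
    while t < sim_seconds:
        obs = world.observe()
        deadline = time.monotonic() + SIM_STEP * SPEEDUP   # dilated thinking budget
        world.apply(agent.think(obs, deadline), dt=SIM_STEP)
        t += SIM_STEP

# run_dilated(World(), Agent(), sim_seconds=1.0)
# one simulated second then costs on the order of 1000 real seconds

The environment's clock only advances SIM_STEP per tick no matter how
long the agent deliberates, which is exactly why batch or virtual-world
setups make this trivial and strictly real-time ones do not.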

I suspect that this would not work, as simply increasing computing
power would not validate current AI systems.

> A completely spurious argument.  You would not necessarily *need* to
> "simulate or predict" the AI, because the kind of "simulation" and
> "prediction" you are talking about is low-level, exact state prediction
> (this is inherent in the nature of proofs about Kolmogorov complexity).

This is very important, and I strongly agree that "analysis" of this
kind is unhelpful. It's easy to show that heat engines and turbines and
all sorts of things are so insanely complex that they can't possibly be
modeled in the general case. But we needn't do so. We are interested in
the behavior of certain parameters of such systems, and we can reduce
the space of the systems we investigate (very few people build turbines
with disconnected parts, or asymmetrical rotation, for example).

> It is entirely possible to build an AI in such a way that the general
> course of its behavior is as reliable as the behavior of an Ideal Gas:
> can't predict the position and momentum of all its particles, but you
> sure can predict such overall characteristics as temperature, pressure
> and volume.

This is the only claim in this message I have any disagreement with
(which must be some sort of record, given my poor history with
Richard). I agree that it's true in principle that AIs can be made this
way, but I'm not yet convinced that it's possible in practice.

It may be that the goals and motivations of such artificial systems are
not among the characteristics that lie on the surface of that boiling
complexity, but among those buried within it. I have the same
disagreement with Eliezer about the certainty he places on the future
characteristics of AIs: given that no one here is describing the
behavior of a specific AI system, such conclusions strike me as
premature, though perhaps not unwarranted.
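
To make the Ideal Gas analogy concrete, here is a minimal sketch of my
own (the particle count, units, and velocity distribution are arbitrary
assumptions, not anything from Richard's post). The individual particle
velocities are effectively unpredictable from run to run, yet the
temperature computed from them is rock steady; the open question, to
me, is whether an AI's goals and motivations behave like the
temperature or like the individual trajectories.

import random

N = 100000    # number of particles (arbitrary)
K_B = 1.0     # Boltzmann constant in arbitrary units; particle mass taken as 1

def sample_gas(seed):
    rng = random.Random(seed)
    # Each velocity component drawn from a zero-mean Gaussian, standing in
    # for a Maxwell-Boltzmann distribution; per-particle values differ
    # completely between runs.
    velocities = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
                  for _ in range(N)]
    # Temperature from mean kinetic energy: <(1/2) m v^2> = (3/2) k_B T
    mean_ke = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / (2 * N)
    return velocities, 2 * mean_ke / (3 * K_B)

v1, t1 = sample_gas(seed=1)
v2, t2 = sample_gas(seed=2)
print(v1[0], v2[0])    # microstate: individual velocities are completely different
print(t1, t2)          # macrostate: temperatures agree to a fraction of a percent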

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com
