In response to Jim Bromer's post of Wed 1/7/2009 8:24 PM

 

=========Jim Bromer==========>

All of the major AI paradigms, including those that are capable of learning,
are flat according to my definition.  What makes them flat is that the
method of decision making is minimally-structured and they funnel all
reasoning through a single narrowly focused process that smushes different
inputs to produce output that can appear reasonable in some cases but is
really flat and lacks any structure for complex reasoning.

 

====Ed Porter====>

This is certainly not true of a Novamente-type system, at least as I
conceive of it being built on the type of massively parallel, highly
interconnected hardware that will be available to AI within 3-7 years.  Such
a system would be hierarchical in both the compositional and
generalizational dimensions, and the computation would take place by
importance-weighted probabilistic spreading activation, constraint
relaxation, and k-winner-take-all competition across multiple layers of
these hierarchies, so the decision making would not "funnel all reasoning
through a single narrowly focused process" any more than human thought
processes do.
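
To make the idea concrete, here is a toy sketch of one spreading-activation
cycle with k-winner-take-all competition.  This is my own illustration, not
Novamente code; the node names, link weights, and the simple top-k rule are
all invented for the example.

```python
# Toy sketch only: activation flows along importance-weighted links,
# then k-winner-take-all competition lets only the k most active
# nodes keep their activation for the next cycle.

def spread_activation(links, activation, k):
    """One cycle: propagate activation along weighted links, then
    zero out every node except the k strongest (k-winner-take-all)."""
    new_act = dict.fromkeys(activation, 0.0)
    for (src, dst), weight in links.items():
        new_act[dst] += activation[src] * weight
    winners = sorted(new_act, key=new_act.get, reverse=True)[:k]
    return {node: (act if node in winners else 0.0)
            for node, act in new_act.items()}

# An invented fragment of a generalization hierarchy.
links = {("cat", "mammal"): 0.9, ("cat", "pet"): 0.7,
         ("dog", "mammal"): 0.9, ("dog", "pet"): 0.8,
         ("mammal", "animal"): 0.95, ("pet", "animal"): 0.3}
activation = {"cat": 1.0, "dog": 0.0, "mammal": 0.0,
              "pet": 0.0, "animal": 0.0}
print(spread_activation(links, activation, k=2))
```

Starting from an active "cat" node, one cycle leaves only "mammal" and "pet"
active; the competition prunes the rest rather than funneling everything
through one process.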

 

If a decision is to be made, it makes computational sense to have some
selection process that focuses attention on a selected one of multiple
possible candidate actions or thoughts.  If that is the type of "funneling"
that you object to, you are largely objecting to decision making itself.

 

=========Jim Bromer==========>

so along came neural networks and although the decision making is
superficially distributed and can be thought of as being comprised of a
structure of layer-like stages in some variations, the methodology of the
system is really just as flat.  Again anything can be dumped into the neural
network and a single decision making process works on the input through a
minimally-structured reasoning system and output is produced regardless of
the lack of appropriate relative structure in it.  In fact, this lack of
discernment was seen as a major breakthrough! Surprise, neural networks did
not work just like the mind works in spite of the years and years of
hype-work that went into repeating this slogan in the 1980's.

 

====Ed Porter====>

It depends on what you mean by neural nets.  If you mean the typical
three-layer backprop net that started gaining attention in the mid-1980s,
you are dealing with a distributed system, but a very limited one.
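
For reference, that limited mid-1980s architecture fits in a few dozen lines.
The sketch below is my own illustration (layer size, learning rate, and task
are all arbitrary choices): a three-layer net trained by backprop on XOR,
with everything funneled through one small hidden layer.

```python
# Toy three-layer backprop net of the classic mid-1980s variety:
# two inputs, a small hidden layer, one output, trained on XOR
# by stochastic gradient descent with sigmoid units.
import math
import random

random.seed(0)
H = 4        # hidden units (arbitrary choice for this sketch)
LR = 0.5     # learning rate (arbitrary choice for this sketch)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each hidden unit: 2 input weights + bias; output unit: H weights + bias.
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_ho = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_ih]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(H)) + w_ho[H])
    return h, o

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

e_before = total_error()
for _ in range(20000):
    x, t = random.choice(data)
    h, o = forward(x)
    d_o = (o - t) * o * (1 - o)                  # output-layer delta
    for j in range(H):
        d_h = d_o * w_ho[j] * h[j] * (1 - h[j])  # hidden-layer delta
        w_ho[j] -= LR * d_o * h[j]
        w_ih[j][0] -= LR * d_h * x[0]
        w_ih[j][1] -= LR * d_h * x[1]
        w_ih[j][2] -= LR * d_h
    w_ho[H] -= LR * d_o
e_after = total_error()
print(round(e_before, 3), round(e_after, 3))
```

Note how every input pattern is pushed through the same single hidden layer
and error signal; that is exactly the bottleneck that makes the classic net
"distributed, but very limited."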

 

But if by a neural net you mean the types of nets that simulate substantial
portions of mammalian brains, such as the work being done by IBM's
Dharmendra Modha, or by the Blue Brain Project of the Ecole Polytechnique
Federale de Lausanne and IBM, I think you would find that what is going on
does not funnel "all reasoning through a single narrowly focused process"
significantly more than do mammalian brains of the size being simulated.

 

=========Jim Bromer==========>

Finally we reach the next century to find that the future of AI has already
arrived and that future is probabilistic reasoning!  ..  It uses a funnel
minimally-structured method of reasoning whereby any input can be smushed
together with other disparate input to produce a conclusion which is only
limited by the human beings who strive to program it!

 

====Ed Porter====>

Probabilistic reasoning can be used in many different ways, and it is used
in both of the types of systems I have described above.  It is not guilty of
the alleged "funneling," except in the types of computational processes
where any intelligence trying to accomplish the same goal would tend to
funnel similarly.
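
As one small example of what probabilistic reasoning actually does with
disparate inputs, consider a single Bayesian update.  The numbers below are
invented purely for illustration; nothing is "smushed," the prior and the
evidence each enter the conclusion in a principled way.

```python
# Minimal sketch of one common form of probabilistic reasoning:
# Bayes' rule combining a prior belief with one piece of evidence.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given P(H), P(E|H), and P(E|~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# A hypothesis with a 1% prior; evidence that is 90% likely if the
# hypothesis is true and 5% likely if it is false.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05)
print(round(posterior, 3))   # -> 0.154
```

The conclusion is constrained by both inputs and by the laws of probability,
not merely by "the human beings who strive to program it."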

 

=========Jim Bromer==========>

The very allure of minimally-structured reasoning is that it works even in
some cases where it shouldn't.  It's the hip hooray and bally hoo of the
smushababies of Flatway.

 

====Ed Porter====>

In summary, I think your criticism has some validity as applied to many
traditional approaches to AI, and is somewhat applicable to most current AI
projects, because they are so severely limited by hardware that they cannot
afford the complexity they would require to work properly.

 

But with the type of hardware that could be built in 3-7 years, with
hundreds of thousands or millions of cores, through-silicon vias to provide
fast processor-to-memory bandwidth, multi-layer wafer-scale integration, and
photolithographically created photonics, it will be possible to get hardware
at prices that many AI researchers can afford ($50K to $500K) that will be
faster than the world's current fastest supercomputers for important AI
tasks like massively parallel spreading activation, and dynamic attention
focusing within such activation.

 

In such systems reasoning need not be any more "smushed" than it is in
smaller mammalian brains.  And in the larger systems made from such hardware
that will be required for human level AGI, the computation need be no more
"smushed" than it is in the human brain.

 

So I think your allegations of "smushing" are overgeneralized, and that to
the extent they have any validity, the hardware and software approaches to
AGI that will start dominating within 3 to 10 years will have made their
relevance largely historical.

 


Ed Porter


 

 

-----Original Message-----
From: Jim Bromer [mailto:[email protected]] 
Sent: Wednesday, January 07, 2009 8:24 PM
To: [email protected]
Subject: [agi] The Smushaby of Flatway.

 

All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition.  What makes them flat is
that the method of decision making is minimally-structured and they funnel
all reasoning through a single narrowly focused process that smushes
different inputs to produce output that can appear reasonable in some cases
but is really flat and lacks any structure for complex reasoning.

The classic example is of course logic.  Every proposition can be described
as being either True or False and any collection of propositions can be
used in the derivation of a conclusion regardless of whether the input
propositions had any significant relational structure that would actually
have made it reasonable to draw the definitive conclusion that was drawn
from them.

But logic didn't do the trick, so along came neural networks and although
the decision making is superficially distributed and can be thought of as
being comprised of a structure of layer-like stages in some variations, the
methodology of the system is really just as flat.  Again anything can be
dumped into the neural network and a single decision making process works
on the input through a minimally-structured reasoning system and output is
produced regardless of the lack of appropriate relative structure in it.
In fact, this lack of discernment was seen as a major breakthrough!
Surprise, neural networks did not work just like the mind works in spite of
the years and years of hype-work that went into repeating this slogan in
the 1980's.

Then came Genetic Algorithms and finally we had a system that could truly
learn to improve on its previous learning and how did it do this?  It used
another flat reasoning method whereby combinations of data components were
processed according to one simple untiring method that was used over and
over again regardless of any potential to see input as being structured in
more ways than one.  Is anyone else starting to discern a pattern here?

Finally we reach the next century to find that the future of AI has already
arrived and that future is probabilistic reasoning!  And how is
probabilistic reasoning different?  Well, it can solve problems that logic,
neural networks, genetic algorithms couldn't!  And how does probabilistic
reasoning do this?  It uses a funnel minimally-structured method of
reasoning whereby any input can be smushed together with other disparate
input to produce a conclusion which is only limited by the human beings who
strive to program it!

The very allure of minimally-structured reasoning is that it works even in
some cases where it shouldn't.  It's the hip hooray and bally hoo of the
smushababies of Flatway.

 

Jim Bromer

 

 

-------------------------------------------

agi

Archives: https://www.listbox.com/member/archive/303/=now

RSS Feed: https://www.listbox.com/member/archive/rss/303/

Modify Your Subscription:
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com



