Matt Mahoney wrote:
> From: Pei Wang <[EMAIL PROTECTED]>
>> On 10/20/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>>> It is not that we can't come up with the right algorithms. It's
>>> that we don't have the computing power to implement them.
>>
>> Can you give us an example? I hope you don't mean algorithms like
>> exhaustive search.

> For example, neural networks which perform rudimentary pattern
> detection and control for vision, speech, language, robotics, etc.
> Most of the theory had been worked out by the 1980s, but applications
> have been limited by CPU speed, memory, and training data. The basic
> building blocks were worked out much earlier. There are only two
> types of learning in animals: classical (association) and operant
> (reinforcement) conditioning. Hebb's rule for classical conditioning,
> proposed in 1949, is the basis for most neural network learning
> algorithms today. Models of operant conditioning date back to
> W. Ross Ashby's 1960 "Design for a Brain", where he used randomized
> weight adjustments to stabilize a 4-neuron system built from vacuum
> tubes and mechanical components.
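
For concreteness, Hebb's rule in its simplest form is the update
dw = eta * x * y: a weight grows when pre- and post-synaptic activity
coincide. The sketch below pairs that with an Ashby-style
keep-it-if-it-helps random adjustment; the learning rate, toy
stimulus, and toy objective are illustrative assumptions, not taken
from the thread.

import numpy as np

rng = np.random.default_rng(0)

# Hebb's rule (classical-conditioning flavour): a weight grows in
# proportion to the product of pre-synaptic activity x and
# post-synaptic activity y.
eta = 0.1                          # learning rate (assumed)
x = np.array([1.0, 0.0, 1.0])      # toy stimulus pattern (assumed)
y = 1.0                            # response paired with the stimulus
w = np.zeros(3)
for _ in range(20):
    w += eta * x * y               # co-active units are strengthened
print(w @ x)                       # stimulus alone now drives a response

# Ashby-style randomized adjustment (operant-conditioning flavour):
# try a random weight change and keep it only if behaviour improves.
def err(w):
    return float((w @ x - 1.0) ** 2)   # toy stability criterion (assumed)

w2 = rng.normal(size=3)
for _ in range(200):
    trial = w2 + rng.normal(scale=0.1, size=3)
    if err(trial) < err(w2):           # reinforcement: keep what works
        w2 = trial

Note that plain Hebbian growth is unbounded; practical variants
(e.g. Oja's rule) add a normalizing term.
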
>
> Neural algorithms are not intractable. They run in polynomial time.
> Neural networks can recognize arbitrarily complex patterns by adding
> more layers and training them one at a time. This parallels the way
> people learn complex behavior. We learn simple patterns first, then
> build on them.
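
One way to read "training them one at a time" is greedy layer-wise
fitting: freeze the earlier layer and train only the output layer on
its features. A minimal sketch on XOR, which no single layer can
separate; here a fixed random hidden layer stands in for a layer
trained earlier, and the layer sizes, seed, and least-squares fit are
all assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a lone output layer fails on the
# raw inputs but succeeds on features from one extra layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 1., 1., 0.])

# Layer 1: frozen random hidden layer (stand-in for a layer trained
# earlier); its forward pass is one matrix multiply, hence polynomial.
W1 = rng.normal(size=(2, 8))
b1 = rng.normal(size=8)
H = np.tanh(X @ W1 + b1)                      # hidden features, (4, 8)

# Layer 2: train only the output layer on those features.
Hb = np.hstack([H, np.ones((4, 1))])          # append a bias column
w2, *_ = np.linalg.lstsq(Hb, t, rcond=None)   # fit output weights
pred = (Hb @ w2 > 0.5).astype(float)
print(pred)                                   # recovers t for typical seeds

Greedy unsupervised pretraining of deeper stacks follows the same
pattern, one layer at a time.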


I initially wrote a few sentences saying what was wrong with the above, but I chopped it. There is just no point.

What you said above is just flat-out wrong from beginning to end. I have done research in that field, and taught postgraduate courses in it, and what you are saying is completely divorced from reality.





Richard Loosemore
