On Fri, 20 Oct 2006 22:15:37 -0400, Richard Loosemore wrote
> Matt Mahoney wrote:
> > From: Pei Wang <[EMAIL PROTECTED]>
> >> On 10/20/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > 
> >>> It is not that we can't come up with the right algorithms.  
> >>> It's that we don't have the
> >>> computing power to implement them.
> > 
> >> Can you give us an example? I hope you don't mean algorithms like
> >> exhaustive search.
> > 
> > For example, neural networks which perform rudimentary pattern 
> > detection and control for vision, speech, language, robotics, etc.  
> > Most of the theory had been worked out by the 1980s, but 
> > applications have been limited by CPU speed, memory, and training 
> > data.  The basic building blocks were worked out much earlier.  
> > There are only two types of learning in animals: classical 
> > (association) and operant (reinforcement) conditioning.  
> > Hebb's rule for classical conditioning, proposed in 1949, is 
> > the basis for most neural network learning algorithms today.  
> > Models of operant conditioning date back to W. Ross Ashby's 
> > 1960 "Design for a Brain", where he used randomized weight 
> > adjustments to stabilize a 4-neuron system built from vacuum 
> > tubes and mechanical components.
> >
> > Neural algorithms are not intractable.  They run in polynomial time.  
> > Neural networks can recognize arbitrarily complex patterns by adding 
> > more layers and training them one at a time.  This parallels the 
> > way people learn complex behavior.  We learn simple patterns first, 
> > then build on them.
> 
> I initially wrote a few sentences saying what was wrong with the 
> above, but I chopped it.  There is just no point.
> 
> What you said above is just flat-out wrong from beginning to end.  I 
> have done research in that field, and taught postgraduate courses in 
> it, and what you are saying is completely divorced from reality.
> 
> Richard Loosemore

I have taken maybe one and a half graduate classes on the subject (it seems
like every AI survey class has to touch on neural nets again), and I have
neither taught nor done research in the area, but even I recognized that most
of that was wrong.  I at least hold out the possibility that neural nets can
be made useful given a better theory of architectures and much greater
computing power.  I think it would be worthwhile for you to take the time to
list what you think the flaws were, if only to open the possibility of some
positive recommendations for research directions.  Even though you may be
completely disillusioned, maybe not everyone is.
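For what it's worth, the Hebbian rule Matt mentions is simple enough to sketch
in a few lines.  This is only an illustration of the basic idea ("cells that
fire together wire together"), not any particular model from this thread; the
function name, learning rate, and activation values below are all made up for
the example.

```python
def hebbian_update(weights, pre, post, rate=0.1):
    """Hebb's rule: w_ij += rate * pre_i * post_j.

    weights is a matrix (list of rows), one row per presynaptic unit,
    one column per postsynaptic unit.  Returns the updated matrix.
    """
    return [
        [w + rate * x * y for y, w in zip(post, row)]
        for x, row in zip(pre, weights)
    ]

# Two input units, two output units, weights start at zero.
w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1.0, 0.0]   # only the first input unit is active
post = [0.0, 1.0]  # only the second output unit is active
w = hebbian_update(w, pre, post)
# Only the weight linking the two co-active units grows; all others
# stay at zero, since the update is a product of the two activations.
```

The point is just that the update is local and cheap (a few multiplications
per weight), which is why Matt can claim such algorithms run in polynomial
time; whether they suffice for general intelligence is the open question.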

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
