----- Original Message ----- From: Jim Bromer
Subject: [agi] Ideological Interactions Need to be Studied

An excellent, thoughtful post.  Thank you!

During the past few years, I have often made critical remarks about AI theories suggesting that some basic method, especially some rather simple objective method (like reinforcement), could be used to produce higher intelligence without further examination and rendering of ideological complexity. By ideological complexity, I am referring to the kinds of complications that might be discovered by examining how actual ideas work and interact. When someone actively advanced a familiar theory without adding any original ideas of his own to explain how it could be made more effective than it had been, I would often end up calling the stated theory "simplistic" -- for example, the idea that reinforcement alone could be used to produce higher artificial intelligence. Of course reinforcement is part of learning, but I object to the argument that a program using only superficial reinforcement could work as a feasible, realizable AGI project without further advances in other areas of the problem.
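
For concreteness, here is roughly what a bare reinforcement mechanism amounts to -- a minimal tabular Q-learning sketch, where all constants and the (state, action, reward) vocabulary are hypothetical placeholders, not anyone's actual project. Note how little of "how ideas interact" it captures:

    # Minimal tabular Q-learning update -- the kind of bare reinforcement
    # mechanism under discussion. All constants and the (state, action,
    # reward) vocabulary are hypothetical placeholders.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration
    Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

    def choose_action(state, actions):
        """Epsilon-greedy choice over a fixed action set."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions):
        """One-step temporal-difference update toward reward plus discounted max."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])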

I think that there are several causes of this problem . . . .

One is elegance. It would be "oh, so nice" to find one idea that would solve the entire problem. After all, everyone knows that the single concept of "neurons" is what our brains are built upon . . . . The problem is that people then take an incredibly simplistic view of what a neuron is and can't figure out why they can't get it to work, or why they have to use radically different simplifications and formulas to make it work in different circumstances. The field of Neural Networks is rife with examples of this, but unless they've gone through some serious study of neural networks, people are very likely to just casually sling out the argument "Neural networks can do this . . . ." without realizing that it is the equivalent of saying "Animals can do this" when talking about anything from naturally breathing water to running thirty-five mph to flying to landing on the moon. (If you believe that this is an overstatement, do the honest work of going through the two Parallel Distributed Processing volumes *and* the spiral-bound workbook by McClelland and Rumelhart and see if you still have the same opinion.)
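
To see just how thin that "incredibly simplistic view" is, here is the standard PDP-style unit in its entirety -- a sketch with made-up numbers, purely for illustration; everything a real neuron does beyond a weighted sum and a squashing function is simply absent:

    # The standard point-neuron of the PDP era: a weighted sum pushed
    # through a squashing function. Weights and inputs are made-up
    # numbers, purely for illustration.
    import math

    def unit(inputs, weights, bias):
        """Classic unit: activation = sigmoid(w . x + b)."""
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-net))

    print(unit([0.5, 1.0], [0.8, -0.3], 0.1))   # one scalar out; that's the whole "neuron"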

Another is paucity of tools. When all you've got is a hammer, everything looks like a nail. You *can* drive a screw with gentle taps of a hammer, but when you add a screwdriver, the productivity increase is amazing and all of a sudden you've got an entrée into *new* concepts like nuts and bolts . . . . This is my frustration with the non-systems types on this mailing list. Too many times I've heard the chorus, "Yes, X may be better but Y is good enough." To me, that's like saying that a hammer is good enough to drive screws. I see all the chatter about needing a scripting language and just shake my head, since subsets of both C# and F# make *excellent* scripting languages and come with closures (as in LISP, Scheme, etc.), functional programming, and immediate compilation baked right in. Instead, they're going down the route of making multiple, separate languages *REQUIRED* -- and then complaining about the resulting chaos -- when separate languages could simply be made *possible* at the developer level (where the differences are really just minor variations in syntax) while the underlying structure and capabilities of each syntax remain basically identical.
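
For readers unfamiliar with the term, a closure is just a function that captures the environment it was defined in and keeps that state alive between calls. A minimal sketch (shown in Python only for brevity -- this is not C#/F# syntax, though C# lambdas and F# functions capture variables the same way):

    # A closure: the inner function 'step' captures the variable 'count'
    # from its defining scope and keeps it alive between calls.
    def make_counter(start=0):
        count = start
        def step(delta=1):
            nonlocal count           # 'step' closes over 'count'
            count += delta
            return count
        return step

    tick = make_counter()
    print(tick(), tick(), tick())    # 1 2 3 -- state lives in the closure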

But, although I have frequently criticized the propagation of simplistic AI paradigms, at the same time I definitely believe that simple reasoning is the method by which thinking is advanced. On the surface this might look like a major contradiction. Of course, most wiser readers understand that there is a difference between my use of the phrases "simplistic AI paradigms" and "simple reasoning," but even so there is an odd discordance between these two prongs of my current thesis.

The explanation is that I believe a general AI must be able to integrate complicated ideas. This can even be done through the application of simple reasoning in a gradual learning process, but it would require extensive programming beyond what is usually suggested in reference to methods like reinforcement or the other overly simplistic AI paradigms that have been around for years.

So I believe that we learn gradually through the use of simple reasoning. But because the mind is able to apply the benefit of that simple reasoning to a more complex base of previously learned knowledge and mental processes, we are able at times to make apparent leaps in comprehension. So even though the paradigm of reinforcement may look similar to the paradigm of gradual learning, the real difference between the theories lies in the mysteries of the ways ideas and concepts interact. And these interactions are complicated.
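
One way to picture those "apparent leaps": even a single trivial rule of inference, iterated over an accumulated base of facts, reaches conclusions several steps removed from any one observation. A toy forward-chaining sketch, with facts and rules invented purely for illustration:

    # Toy forward chaining: one trivial rule of inference (if all premises
    # are known, add the conclusion) applied repeatedly over an accumulated
    # fact base. The facts and rules are invented for illustration.
    facts = {"rain"}
    rules = [({"rain"}, "wet_ground"),
             ({"wet_ground"}, "slippery"),
             ({"slippery", "running"}, "fall_risk")]

    def forward_chain(facts, rules):
        """Apply every satisfiable rule until no new fact appears."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain(set(facts), rules))
    # -> {'rain', 'wet_ground', 'slippery'}; 'fall_risk' stays out until
    #    'running' is learned -- adding one fact later unlocks a new conclusion.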

I agree. I believe that we will need a relatively small number of reasonably simple paradigms -- but we need to choose them *very* well and, more than that, we need to do some serious study of how to combine them . . . .

I believe that these mysteries of conceptual complexity (or ideological interactions) can be discovered through discussion and experiment so long as that effort is not thwarted by the expression of immature negative emotions and abusive anti-intellectual rants. While some of us who are trying to create constructive dialogues can be annoying and confrontational at times, those who are characteristically angry and hostile toward us are unlikely to make significant advancements in the more subtle studies that are required without first acquiring greater insight into what is really driving their emotional reactions. And those who are ideologically opposed to the study of ideas, by their own ideological biases, are also unlikely to participate constructively in such discussions.

:-) Hopefully I fall under "annoying and confrontational at times" and not "ideologically opposed to the study of ideas". :-)

I believe that an efficacious program can be constructed from the remains of various AI paradigms of the past, and I appreciate those who worked hard to develop those actual AI programs. I just do not believe that an effective advancement will be found through the tedious repetition of shallow arguments that have been tried but failed to produce advances of any substance. I feel that it is time to examine how ideas interact using simple theories, but this kind of effort will only make sense with the recognition that simple theories must be combined and integrated with previously acquired knowledge through complicated processes of intelligence which are not yet widely appreciated.

AMEN!  So where would you start?



