> Perhaps higher intelligence can be explained by the creation of
> specialized algorithms that can react to learned kinds of information.
> (I don't know what this theory is called.)
http://en.wikipedia.org/wiki/Shaping_(psychology)

> During autoshaping, food comes irrespective of the behavior of the
> animal. If reinforcement were occurring, random behaviors should
> increase in frequency because they should have been rewarded by random
> food. Nonetheless, key-pecking reliably develops in pigeons,[6] even if
> this behavior had never been rewarded.

I think the theory you mention has some merit. It appears each species has
some built-in behavioral learning biases that permit its members to
evaluate the appropriateness of certain standard forms of behavior (or
slight variations thereof) under certain standard conditions, but which do
not generalize to other behaviors or conditions. For example, I doubt you
could teach a dog to bark precisely when there's *not* a cat (a conditional
restriction) or to howl in a rising pitch (a behavioral restriction).

Certain broad classes of behavior are almost always useless or a bad idea,
and so evolution has captured that information by not implementing them as
capabilities. To include them in the genetic plan would result in the
organism wasting a lot of time rediscovering what's not useful, instead of
focusing on better opportunities. It's better to have a general idea of
what the right answer is before you start learning.

On Fri, Feb 8, 2013 at 1:52 PM, Jim Bromer <[email protected]> wrote:

> As I said, this idea came to me because I was thinking of a concise
> generator to produce an immensity of different variations of a
> typological character. I really haven't figured out how difficult it
> would be to create an algorithm generator; I have only been able to
> consider very simple models.
>
> One current theory in AI/AGI is that animals have many different (simple
> algorithm-like) ways to react to a variety of narrow kinds of
> information.
> For instance, you will have an automatic reaction if something moves
> unexpectedly from behind you and is extremely close to your head (close
> to one of your eyes). It is believed that there may be a fairly
> primitive mental 'mechanism' to react to that sort of thing. Perhaps
> higher intelligence can be explained by the creation of specialized
> algorithms that can react to learned kinds of information. (I don't know
> what this theory is called.) I am wondering if a system of algorithm
> generators might be used to develop individual algorithms as needed. If
> feasible, this might be an economical way to create specialized
> algorithms as needed without having to supply an overly extensive set of
> simple algorithms that might not be used.
>
> Also, since the parameters to the algorithm generators would be the
> method by which different algorithms were generated (when the program
> thought that it should try them), it might be possible for the program
> to make educated guesses about the kind of algorithm that might be
> needed for a special class of problems, based on characteristics of the
> problem that can be related to the parameters of previously generated
> algorithms. This is a nice theory, except that there is of course a
> complication. Effects might not show up until you require a more
> sophisticated (or a less sophisticated) level for an algorithm (for the
> generator parameters). All I can think of is a simplistic example. If
> you are mapping a curved line vector onto a low-resolution image buffer
> (like a display buffer), certain details of the vector might not show
> up. There is no reason that a computer program would guess that the
> detail is being lost because the image buffer's resolution is too low.
> On the other hand, if the program noticed in one case that a detail
> suddenly became apparent when the image buffer was set to a higher
> resolution, then it could learn that higher resolution could bring out
> more detail on certain kinds of image objects. One possible advantage is
> that, by using the generator theory, this sort of thing might be better
> systematized and hence more amenable to formal techniques.
>
> Jim Bromer
>
> On Fri, Feb 8, 2013 at 11:56 AM, Piaget Modeler <[email protected]> wrote:
>
>> What kinds of data objects are analyzed, and what does analysis imply?
>>
>> If the algorithms are not solutions to gaps or impediments, then what
>> purpose do they serve?
>>
>> Kindly explain.
>>
>> ~PM
>>
>> ------------------------------
>> Date: Fri, 8 Feb 2013 11:33:10 -0500
>> Subject: Re: [agi] Could Algorithm Generators be a Feasible and
>> Effective AGI Method?
>> From: [email protected]
>> To: [email protected]
>>
>> On Fri, Feb 8, 2013 at 10:38 AM, Ben Goertzel <[email protected]> wrote:
>>
>> "Automated program learning" is a branch of AI that seems close to what
>> you have in mind... This is what the MOSES component of OpenCog
>> attempts to do, though it's currently only really effective at learning
>> simple sorts of programs...
>> -- Ben G
>>
>> I described a very narrow type of programming object: an Algorithm
>> Generator. Because I was able to use such colloquial terms, it is
>> something that almost everyone involved with programming should be able
>> to understand. Any AGI program would have to be capable of doing some
>> automated program learning. The question I was trying to explore was
>> whether or not an explicit system of algorithm generators would be
>> useful and how they might be used. One might argue that any AGI program
>> that was able to learn would effectively be creating (or generating)
>> algorithms.
>> What I am talking about is the question of designing mechanisms that
>> explicitly generate kinds of algorithms. The algorithms that I have in
>> mind are not solution algorithms (per se) but analytical algorithms
>> (including algorithms that analyze data objects by making
>> modifications).
>>
>> Jim Bromer

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
Powered by Listbox: http://www.listbox.com
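The two ideas in Jim's messages, a generator whose parameters determine which concrete algorithm you get back, and a curve detail that only "shows up" when the buffer resolution is raised, can be combined in a short sketch. This is a hypothetical illustration only: the names, the closure-based generator, and the Gaussian-bump test curve are my own inventions, not part of MOSES or any system discussed in the thread.

```python
import math

def make_rasterizer(resolution):
    """'Algorithm generator': the parameter (buffer resolution) determines
    which concrete rasterization algorithm is returned."""
    def rasterize(curve, samples=2000):
        # Map points of a unit-square curve onto a resolution x resolution grid.
        cells = set()
        for i in range(samples):
            t = i / (samples - 1)
            x, y = curve(t)
            cells.add((min(int(x * resolution), resolution - 1),
                       min(int(y * resolution), resolution - 1)))
        return cells
    return rasterize

def curve(t):
    # Nearly flat line with a small bump near t = 0.5 -- the "detail".
    return t, 0.5 + 0.03 * math.exp(-((t - 0.5) / 0.02) ** 2)

low = make_rasterizer(8)(curve)     # coarse buffer
high = make_rasterizer(256)(curve)  # fine buffer

# At resolution 8 the bump collapses into a single row of cells; at 256 it
# spans several rows, so the detail appears only after the generator
# parameter is raised -- the effect Jim describes.
rows_low = len({y for _, y in low})
rows_high = len({y for _, y in high})
print(rows_low, rows_high)
```

A program comparing the two outputs could, in Jim's terms, associate "more rows appeared" with "the resolution parameter was increased" and make an educated guess about which generator parameters to try on similar image objects.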
