----- Original Message -----
From: "rooftop8000" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Friday, March 23, 2007 1:48 PM
Subject: Re: [agi] My proposal for an AGI agenda
> Suppose there was an AGI framework that everyone could add
> their ideas to.. What properties should it have? I listed
> some points below. What would it take for
> you to use the framework? You can add points if you like.
>
> 1. collaboration. is it possible to focus all attention/work
> in one big project? will it be too complex and unmanageable?

I think specific parts can be assigned to multiple developers in a coherent manner without requiring full-time effort from everyone. An ad hoc or "anything goes" approach might be interesting, but without direction it would never get anywhere.

> 2. supported computer languages? many or one?

A multi-language design won't work, IMO. No one could just drop a whole set of routines into someone else's computer and watch the result, and modules written in different languages couldn't simply fit together as needed. Larger existing AGI projects like Novamente or A2I2 could communicate in the future using sockets, but that kind of communication between proprietary systems (or designs in other languages) wouldn't be the same as people working in the same environment.

> 3. organization?
> -try to find a small set of algorithms (seed AI)?
> -allow everything that is remotely useful?

If you think you can breed an intelligence from a "small set of algorithms", why not just build it and see if you can? (Evolutionary algorithms wouldn't have to be excluded from the research.) Limiting what people can work on isn't a good idea, but some lines of research have been tried and found wanting. People would be more useful working in the areas that most of the group believes are most promising. Unless someone is getting paid, however, it is difficult to force them NOT to work in an area they are interested in. Not all code would have to be included in the AGI design just because it was written.

> 4. what is the easiest way for people to contribute?
> how could existing algorithms be added easily?
> -neural networks
> -logic
> -existing knowledge bases

I don't think the number of algorithms matters as much as getting some promising results quickly. Even some small promising results!

> 5. what kind of problems should be the focus first?
> -visual/real world robotics?
> -abstract problems?

Is a blind person still intelligent? Can someone still be intelligent if they can't solve abstract problems? The better question is: can you teach anything to someone who doesn't understand your language? I think building models that combine language/context and information would be a good place to start.

> 6. self-modification?
> -don't waste time on it, it will never give good results?
> -all the parts should be able to be modified easily?

Without self-modification, either you have to build all the intelligence into the data, or people have to program the entire AGI by hand. Does either of those consequences appeal to you? If the parts aren't easily modified and you don't know exactly what might work, the project doesn't have much chance.

> 7. organization in modules?
> -what is the best granularity?
> -how to generalize from them (instead
> of just getting the sum of all the algorithms)?

I suggest that models produce a set of coded patterns. These patterns could be looked up to get lists of models that produce similar patterns in other domains. Testing could then be done using algorithms from the initial domain to help solve the problem in the new domain. I think generalization should occur at all levels, all the time.

> 8. cpu power
> -only allow very fast, optimized algorithms
> -allow anything

Most importantly, it has to work. That said, is "allow anything" OK with you?

> 9. the set of properties required
> -too large to do by hand?
> -try to let properties emerge?

Why not both? Let the system evolve into whatever properties make sense.

> 10. KBs. how to use them?
> how to reuse existing ones

I don't think existing KBs would be very useful, because I think language/knowledge and context must be coded or learned together. What weight would you automatically assign to information that you haven't created and whose validity or completeness you don't know?

> 11. embodiment? important or not?

If a person is paralyzed and can only type on a computer, are they still intelligent? If a knowledgeable person is helping you out over the internet using instant messenger (no vision), are they still intelligent or useful? Embodiment doesn't seem to be a requirement for intelligence in people.

> 12. representations?
> -try to find a small set
> -allow everything

Why not try some small set, then throw out what you don't like and add new ones if you think it necessary?

> 13. learning. Can a system have too many ways of learning?

So long as they work, why not the more the merrier?

> 14. how to make it scale well

Can many people work on different parts of the project on their computers at the same time? Can synchronization of the many computers be done automatically?

> 15. central structure?
> distributed control?

Who on this list would abdicate their autonomy to one centralized authority? If not distributed, then not at all.

--
David Clark

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
