yky,

> my project is still in its infancy.
> Right now we're focusing on vision, which turns out to be extremely hard.

It looks like the complexity of the vision problem killed many other projects. So why not start with a text-oriented AGI? I know you may then hit the knowledge-grounding problem a bit harder, but that's IMO still easier to knock down than the whole vision problem, isn't it?

My old post from SciForums:
"One of the reasons why I chose text-oriented AI at this point is the simple way of introducing new concepts to the AI. It can make valid thoughts about a new concept even with very limited knowledge about it, because that little piece of info might be sufficient for a number of particular thoughts. In 3D, new concepts mean lots of work which users likely cannot do, and lots of details which may not necessarily be important. Introduction of new concepts is something you really want to keep simple, because there is a huge number of concepts to learn in order to understand our world. You do not want to have the dev team involved whenever there is something new for the AI to learn (like new types of objects and their relationships with other objects and world events). Besides, when you simulate an nD world, you can easily hit the limits of your hardware."

I think it would be easier to first develop a text-oriented AGI running on a server. Once we see that the core AGI algorithms work as intended, it would then IMO make more sense to play with harder-to-perceive environments such as VR or various types of video (and other) inputs.

BTW, AGI is just one of my medium-priority hobbies. I don't consider myself a real AGI expert. I'm an experienced software developer and DB designer, though.

> I think there should be some sort of built-in AGI mechanisms that prevent it from doing harmful things

IRL, decision makers often face only harmful choices. Ask politicians ;-). I think what you need is a hierarchy of AGI users with well-defined limitations, but there IMO should be at least one group of users who (possibly collectively) can authorize the system to take virtually any particular action. At least in this century, people are IMO the ones who should keep the ultimate control, not the AGI. If we hit the point where we totally cannot comprehend the AGI-suggested actions, we should probably give some thought to self-improvement and/or to improving the tools we use to understand complex concepts.

Regards,
Jiri Jelinek

PS: Happy holidays!!


On 12/21/05, Yan King Yin <[EMAIL PROTECTED]> wrote:

Mark:
MY LITTLE AGI PROJECT

Since I started my studies, I have been interested in AI and in creating AGI, so I have tried to learn as much as possible about various AI disciplines in order to unify them later. In my spare time, I am working on a Production Rule System with reasoning abilities, which I plan to program/teach to use various AI techniques (such as EA, RL, classification, unsupervised learning...) to evolve and develop the rules and facts in the system. I am interested in your opinion about a system like this.
 
Your system sounds interesting, although it's not an entire AGI framework.  You're on the right track trying to unify various AI approaches (such as planning, reasoning, perception, etc).  I have some basic ideas of how to build an AGI, but my project is still in its infancy.  I'd welcome other researchers to join my open source project.
 
My AGI theory is based on the compression of sensory experience, and the basic operation is pattern recognition.  Traditional production rule systems may be a bit too limited because they cannot perform probabilistic inference or statistical pattern recognition.
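To make the contrast concrete, here is a hypothetical sketch (not from the original thread; all rules, facts, and certainty values are made up for illustration) of a minimal forward-chaining production rule engine extended with MYCIN-style certainty factors. A classical production system would only fire or not fire each rule; attaching a strength to each rule and a certainty to each fact gives a crude form of the probabilistic inference mentioned above.

```python
# Hypothetical example: a tiny forward-chaining rule engine with
# certainty factors. Rules and facts are invented for illustration.

# Each rule: ((premises, conclusion), rule_strength in [0, 1]).
rules = [
    ((["has_fur"], "mammal"), 0.9),
    ((["mammal", "eats_meat"], "carnivore"), 0.8),
]

# Initial facts with certainties in [0, 1].
facts = {"has_fur": 1.0, "eats_meat": 0.7}


def forward_chain(rules, facts):
    """Fire rules until no fact's certainty can be raised further."""
    facts = dict(facts)  # don't mutate the caller's dict
    changed = True
    while changed:
        changed = False
        for (premises, conclusion), strength in rules:
            if all(p in facts for p in premises):
                # Conclusion certainty = weakest premise * rule strength.
                cf = min(facts[p] for p in premises) * strength
                if cf > facts.get(conclusion, 0.0):
                    facts[conclusion] = cf
                    changed = True
    return facts


result = forward_chain(rules, facts)
print(round(result["mammal"], 2))     # 0.9
print(round(result["carnivore"], 2))  # 0.56
```

A plain production system would have to treat "mammal" and "carnivore" as simply true once the rules fired; here the derived facts carry graded confidence, which is the kind of thing statistical pattern recognition needs.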
 
Right now we're focusing on vision, which turns out to be extremely hard.
 
Re your analysis of AGI social issues:  I think there should be some sort of built-in AGI mechanisms that prevent it from doing harmful things, although the exact form of them is still unclear to me.  The folks at SIAI have thought about this issue much more intensely, but I think their vision is a bit over the top.
 
Secondly, I agree that AGI may create more social inequality between those who know how to exploit AGI and those who are left behind.  I'm afraid this is also inevitable.  The best we can do is try to ameliorate such effects.  The good side is that AGI will be very easy to use because it can understand human language.
 
Cheers,
yky


To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

