Dear yky and Jiri Jelinek,

AGI PROJECT

By unsupervised learning I meant compression of facts, rules, and inputs. The similarities derived from the compression can be used as instinctive associations between different "ideas", and also to reduce the size of the KB. The basic operation in my system is also pattern recognition, and I am planning to implement reasoning (different logic systems, beginning with first-order logic) on top of the pattern-matching layer. (The PRS supports forward and backward chaining.)
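For what it's worth, here is a toy sketch of the forward-chaining loop I have in mind (the rules and fact names are made up for illustration, not from my actual system):

```python
# Minimal forward chaining: fire rules whose conditions are all
# satisfied, and repeat until no new facts can be derived.
# Each rule is (set of condition facts, conclusion fact).
rules = [
    ({"bird", "healthy"}, "can_fly"),   # IF bird AND healthy THEN can_fly
    ({"can_fly"}, "can_migrate"),       # IF can_fly THEN can_migrate
]

def forward_chain(facts, rules):
    """Derive the closure of `facts` under `rules` (naive fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"bird", "healthy"}, rules)))
# ['bird', 'can_fly', 'can_migrate', 'healthy']
```

Backward chaining would run the same rules in the other direction, starting from a goal and recursing into the conditions needed to establish it.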

I am not familiar with statistical pattern recognition. Does this mean that each pattern (or condition) has a certain probability based on the probabilities of its sub-patterns? If so, that sounds useful. I am planning to implement features like this on top of the PRS as well.
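If I understand the idea correctly, a pattern's probability might be composed from its sub-patterns something like this (a deliberately naive sketch: the pattern names are invented, and it assumes the sub-patterns occur independently, which real systems would not):

```python
# Toy illustration: probability that a composite pattern matches,
# given the match probabilities of its sub-patterns, under a
# naive independence assumption.
sub_pattern_prob = {"edge": 0.9, "corner": 0.6, "texture": 0.5}

def pattern_prob(sub_patterns):
    """Probability that all listed sub-patterns co-occur
    (product rule, assuming independence)."""
    p = 1.0
    for s in sub_patterns:
        p *= sub_pattern_prob[s]
    return p

print(pattern_prob(["edge", "corner"]))  # approximately 0.54
```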

I am using a PRS (well, with the chaining it is a somewhat more powerful framework), since it seems flexible enough for a self-modifying AI, and the RETE algorithm also ensures the efficient execution of rules whose conditions might have become true.
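The point of RETE, as I understand it, is to avoid re-matching every rule against every fact on each cycle: rules are indexed by the conditions they mention, so only the rules touched by a newly asserted fact are re-examined. A very rough sketch of that idea (not a real RETE network — no join nodes or partial-match memories, and the names are illustrative):

```python
from collections import defaultdict

# Each rule is (set of condition facts, conclusion fact).
rules = [
    ({"a", "b"}, "c"),
    ({"c"}, "d"),
]

# Alpha-network-like index: condition fact -> rules mentioning it.
index = defaultdict(list)
for rule in rules:
    for cond in rule[0]:
        index[cond].append(rule)

def assert_fact(facts, fact):
    """Add a fact, then re-examine only the rules that mention it."""
    agenda = [fact]
    while agenda:
        f = agenda.pop()
        if f in facts:
            continue
        facts.add(f)
        for conditions, conclusion in index[f]:
            if conditions <= facts and conclusion not in facts:
                agenda.append(conclusion)

facts = set()
assert_fact(facts, "a")
assert_fact(facts, "b")   # completes {"a","b"}, deriving "c" and then "d"
print(sorted(facts))      # ['a', 'b', 'c', 'd']
```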

I think our projects are pretty similar. The basic difference - which I suppose based on your questions - is that you might have based the system on probability and built pattern matching on top of that, while I am doing the opposite. I am interested in your project...

SAFE AGI

I agree, Jiri Jelinek; I also think that the easier problems (e.g. reasoning) should be targeted first, and the computationally heavy problems afterwards. (Also, knowledge grounding is not a problem until one has sensors and actuators, be they textual, visual, or of any other type.)

What do you mean by the SIAI vision being over the top? To me it seems rational. If we suppose the AGI is a rational being, then the goal "Do what you think I want you to do" will steer self-improvement in the right direction as well. Or am I wrong?

From this point of view the hierarchical system also seems unnecessary (although it might be good for the sake of security). And I think that if we can't make a single AGI safe, then a hierarchy is also dangerous. The hierarchy might offer greater security from a statistical point of view, but if the AGIs in it are cleverer than us, or than each other, that could be a problem. Is it possible to create an unselfish institution from selfish individuals? (I think they will start to cooperate for greater profit.)

Best wishes,
Márk



On 12/21/05, Yan King Yin <[EMAIL PROTECTED]> wrote:

Mark:
MY LITTLE AGI PROJECT

Since I started my studies I have been interested in AI and creating AGI, so I have tried to learn as much as possible about various AI disciplines in order to unify them later. In my spare time I am working on a Production Rule System with reasoning abilities, which I plan to program/teach to use various AI techniques (such as EA, RL, classification, unsupervised learning...) to evolve and develop the rules and facts in the system. I am interested in your opinion of a system like this.
 
Your system sounds interesting, although it's not an entire AGI framework.  You're on the right track trying to unify various AI approaches (such as planning, reasoning, perception, etc).  I have some basic ideas of how to build an AGI, but my project is still in its infancy.  I'd welcome other researchers to join my open source project.
 
My AGI theory is based on the compression of sensory experience, and the basic operation is pattern recognition.  Traditional production rule systems may be a bit too limited because they cannot perform probabilistic inference, or statistical pattern recognition.
 
Right now we're focusing on vision, which turns out to be extremely hard.
 
Re your analysis of AGI social issues:  I think there should be some sort of built-in AGI mechanism that prevents it from doing harmful things, although its exact form is still unclear to me.  The folks at SIAI have thought about this issue much more intensely, but I think their vision is a bit over the top.
 
Secondly, I agree that AGI may create more social inequality between those who know how to exploit AGI and those who are left behind.  I'm afraid this is also inevitable.  The best we can do is try to ameliorate such effects.  The good side is that AGI will be very easy to use, because it can understand human language.
 
Cheers,
yky


To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


