William wrote:

 
I suspect that it will be quite important in competition between agents. If one agent has a constant method of learning, it will be more easily predicted by an agent that can figure out its constant method (if it is simple). If it changes (and changes how it changes), then it will be less predictable and may avoid being exploited by other agents.
 
Well, I'm only interested in building an intelligent agent that can maintain knowledge and answer queries, for possible application to scientific and medical research.  I'm not interested in building AIs that compete with each other, especially not for military use.  Others may build those things, but that's not my purpose.

 
They can also be what I think of as soft-wired: programmed, but also allowed to be altered by other parts of the system.
 
"Soft wiring" is a good concept, but I believe that
mechanisms of inference may be totally fixed for all pratical purposes, and we'll let later generations deal with the extra subtleties.

 
If you include such things as reward in labeling, and self-labeling, then I agree. I would like to call the feeling where I don't want to go to bed because of too much coffee 'the jitterings', and to be able to learn that.
 
In the most straightforward analysis, you cannot have an AGI labeling things all by itself.  Somehow, a teacher must label the concepts even though the nameless concepts may have emerged automatically.  That's the bottom line.  How can we do better than that?  If your AGI calls coffee XYZ, and you don't know what XYZ refers to, then you basically have a "Rosetta stone" kind of problem.  Translating between two languages requires AGI, which begs the question.
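To make the point concrete, here is a toy sketch (the data and names are invented purely for illustration, not part of any actual design): concepts can emerge from unsupervised clustering as anonymous IDs, but only a teacher's labels ground them in a shared language.

# Toy illustration: concepts emerge without names; a teacher must attach words.
from sklearn.cluster import KMeans
import numpy as np

# Hypothetical sensory feature vectors the agent has observed.
observations = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.85, 0.15],   # experiences around "coffee"
    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85],   # experiences around "tea"
])

# Unsupervised step: concepts emerge, but only as anonymous cluster IDs.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)

# Supervised step: a teacher names one example from each cluster;
# without this step the agent can only say "XYZ", not "coffee".
teacher_labels = {
    int(clusters.predict([[0.9, 0.1]])[0]): "coffee",
    int(clusters.predict([[0.1, 0.9]])[0]): "tea",
}

def name_of(x):
    # Translate the agent's internal concept ID into a human word.
    return teacher_labels[int(clusters.predict([x])[0])]

print(name_of([0.88, 0.12]))   # -> "coffee"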

 
But the subparts of the brain surely aren't intelligent? Only the whole is? You didn't define intelligence, so I can't determine where our disconnect lies.
 
You said the visual cortex can rewire to auditory parts in an unsupervised manner.  My question is how do you make use of this trick to build an AGI from scratch, without supervised learning?

 
I work with a strange sort of reinforcement learning as a base. You can layer whatever sort of learning you want on top of it; I would probably layer supervised learning on it if the system was going to be social, which would probably be needed for something to be considered AGI.
 
Reinforcement learning may be good for procedural learning, but in my approach I focus only on knowledge maintenance.  I guess reinforcement learning is not an efficient way to deal with knowledge.

 
I am not saying we should imitate a flatworm, then a mouse, then a bird, etc. I am saying that we should first look at the problem classes solved by evolution, and then see how we would solve them with silicon. This would hopefully keep us on the straight and narrow and not let us diverge into a little intellectual cul-de-sac.
 
I agree.  And evolutionary algorithms should not be used to evolve an AGI from scratch, because that would be too slow and because we already know the mechanisms (no need to reinvent the wheel).
 
Saying that because the brain uses neurons to classify things, those methods of classification are fixed, is like saying that because a Pentium uses transistors to compute things and they are fixed, what a Pentium can compute is fixed.
Also, if all neurons do is feature extraction/classification etc., how can we as humans reason and cogitate?
 
I think the mechanisms of thinking in the brain are not that hard to understand.  We don't know the exact details, but we have some very basic understanding of them.

 
Induction and deduction we know. However, there are many things we don't know. For example, getting information from other humans is an important part of reasoning; which humans we should trust, and who may be out to fool us, we don't know how to determine.
 
That's pattern recognition.  We know how to program it.  I don't think there are "many things we don't know".  I think we already know enough to build a practical, functional system.  =)
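For instance, "whom to trust" can be framed as ordinary supervised pattern recognition over features of past interactions. This is only a toy sketch; the features and data below are invented for illustration.

# Toy sketch: trust estimation as supervised pattern recognition.
from sklearn.linear_model import LogisticRegression

# Per informant: [fraction of past claims verified, years known, conflict of interest?]
X = [[0.95, 5, 0],
     [0.40, 1, 1],
     [0.80, 3, 0],
     [0.20, 0, 1]]
y = [1, 0, 1, 0]   # 1 = turned out trustworthy, 0 = not

model = LogisticRegression().fit(X, y)
print(model.predict([[0.70, 2, 0]]))   # predicted trustworthiness of a new informant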

 
Another thing we can't specify completely in advance is the frame problem. Or how to deal with faulty input (if we have an electrical storm that interferes with our AGI, how would it know the inputs were faulty?).
 
The frame problem arose because of small knowledge bases and the inefficiency of inference.  It doesn't mean that we need "new" ways of inference.

 
One last thing we don't know how to deal with is the forgetting problem. What data should we forget? How do we determine which is the least important?
 
"Use it or lose it" may be the mechanism.  We will program that.

 
This is definitely true.  But how do you convince other people to come to your flag?  Argumentation, as I have discovered many times, doesn't help :)  The only way to get any sort of consensus is to make it as much like science as possible and make the impartial world the judge of the fitness of your ideas.
 
I have written some AGI theory on my page:
http://www.geocities.com/GenericAI
which I'm trying to get Ben to incorporate in his design.
 
The design is not yet complete but I believe we're on the way to creating the first functional AGI.
 
yky

