On 9/20/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> William wrote:
> > I suspect that it will be quite important in competition between agents.
> > If one agent has a constant method of learning it will be more easily
> > predicted by an agent that can figure out its constant method (if it is
> > simple). If it changes (and changes how it changes), then it will be less
> > predictable and may avoid other agents exploiting it.
>
> Well, I'm only interested in building an intelligent agent that can
> maintain knowledge and answer queries, for possible application to
> scientific and medical research. I'm not interested in building AIs that
> compete with each other, especially not in military ways. Others may build
> those things, but that's not my purpose.

However, science is also a form of competition between agents (humans
being a type of agent), the winner being the most cited.

Let us say that your type of intelligence becomes prevalent: it would
become very easy to predict what this type of intelligence would find
interesting (just feed it all the research that is commonly fed to it,
and then test it). People would then tailor their own research to be
interesting to this type of system (regardless of whether it was
innovative or groundbreaking). It would stultify research.

> > They can also be what I think of as soft wired. Programmed but also
> > allowed to be altered by other parts of the system.
>
> "Soft wiring" is a good concept, but I believe that mechanisms of
> inference may be totally fixed for all practical purposes, and we'll let
> later generations deal with the extra subtleties.

You can do what you wish. I'm going to study softwiring now.

> > If you include such things as reward in labelling, and self-labeling,
> > then I agree. I would like to call the feeling where I don't want to go
> > to bed because of too much coffee 'the jitterings', and be able to learn
> > that.
>
> In the most straightforward analysis, you cannot have an AGI labeling
> things all by itself. Somehow, a teacher must label the concepts even
> though the nameless concepts may have emerged automatically. That's the
> bottom line. How can we do better than that? If your AGI calls coffee XYZ,
> and you don't know what XYZ refers to, then you basically have a "Rosetta
> stone" kind of problem. Translating between 2 languages requires AGI,
> which begs the question.

This is only the case if we have words that are the same as the
concepts that have emerged. In science there is a large amount of
creation of new concepts. What happens if, in studying astronomical
data, the system comes across a new type of star that varies its colour
slightly, and the AGI decides to call it a chromar? The sort of system
you are describing doesn't seem able to do this.

I can see that a lot of learning will be supervised. But other types
will have to be unsupervised if we want it to discover new things.
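
To make the unsupervised case concrete, here is a rough sketch in Python.
The star data, the two-cluster assumption and the name "chromar" are all
made up; the point is only that the grouping emerges without a teacher, and
the label is attached afterwards.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-star features: [mean colour index, colour variance].
rng = np.random.default_rng(0)
ordinary = rng.normal(loc=[0.5, 0.01], scale=[0.10, 0.005], size=(200, 2))
variable = rng.normal(loc=[0.5, 0.08], scale=[0.10, 0.005], size=(20, 2))
features = np.vstack([ordinary, variable])

# Unsupervised step: group the stars without any labels from a teacher.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# At this point the system only has anonymous cluster ids. The name for the
# small, high-variation cluster ("chromar") is attached afterwards, either by
# a human teacher or by the system coining its own symbol.
minority = np.bincount(clusters).argmin()
concept_names = {int(minority): "chromar"}
print(concept_names)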

>  But the sub parts of the brain aren't intelligent surely? Only the whole
> > is? You didn't define intelligence so I can't determine where our
> disconnect
> > lies.
>
>  You said the visual cortex can rewire to auditory parts in an unsupervised
>
> manner. My question is how do you make use of this trick to build an AGI
> from scratch, without supervised learning?

My goal has never been to dismiss supervised learning from AI designs;
rather, it should be softwired in, and there should be unsupervised
methods of learning that act on it.

> > I work with a strange sort of reinforcement learning as a base. You can
> > layer whatever sort of learning you want on top of it; I would probably
> > layer supervised learning on it if the system was going to be social,
> > which would probably be needed for something to be considered AGI.
>
> Reinforcement learning may be good for procedural learning, but in my
> approach I focus only on knowledge maintenance. I guess reinforcement
> learning is not an efficient way to deal with knowledge.

Not directly, no. But then I am suggesting a layered approach, with
supervised learning doing most of the knowledge maintenance. I am also
interested in procedural learning, hence the difference in emphasis.
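
Here is a rough sketch of the layering, under interfaces I have made up for
illustration: a supervised knowledge module answers queries, and a simple
reinforcement-style base layer adjusts how much weight each module's answers
get. The module names and the scoring rule are hypothetical, not a fixed
design.

from collections import defaultdict

class KnowledgeModule:
    """Supervised layer: stores facts given by a teacher and answers queries."""
    def __init__(self):
        self.facts = {}
    def teach(self, key, value):
        self.facts[key] = value
    def answer(self, key):
        return self.facts.get(key)

class BaseLayer:
    """Reinforcement-style base: keeps a weight per module, updated by feedback."""
    def __init__(self, modules):
        self.modules = modules
        self.weights = defaultdict(lambda: 1.0)
    def query(self, key):
        # Prefer the highest-weighted module that has an answer.
        ranked = sorted(self.modules, key=lambda name: self.weights[name], reverse=True)
        for name in ranked:
            ans = self.modules[name].answer(key)
            if ans is not None:
                return name, ans
        return None, None
    def feedback(self, name, reward):
        # The base layer "softwires" the modules above it by reweighting them.
        self.weights[name] += reward

knowledge = KnowledgeModule()
knowledge.teach("caffeine symptom", "the jitterings")
base = BaseLayer({"knowledge": knowledge})
print(base.query("caffeine symptom"))
base.feedback("knowledge", +1.0)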

> > Saying that because the brain uses neurons to classify things, those
> > methods of classification are fixed, is like saying that because a
> > Pentium uses transistors to compute things and they are fixed, what a
> > Pentium can compute is fixed.
> >
> > Also, if all neurons do is feature extraction/classification etc., how
> > can we as humans reason and cogitate?
>
> I think the mechanisms of thinking in the brain are not that hard to
> understand. We don't know the exact details but we have some very basic
> understanding of it.
>
> > Induction, deduction we know. However there are many things we don't
> > know. For example, getting information from other humans is an important
> > part of reasoning. Which humans we should trust, who may be out to fool
> > us, we don't.
>
> That's pattern recognition.

It is more than pattern recognition, because we also take into account
information from other people when deciding whom to trust. For example,
if Bob, someone you trust, says "Trust Mary", you will probably put
greater store by what Mary tells you. Or, in a scientific setting, an
author that you trust citing another author that is unknown will raise
the unknown author in your opinion.

The opposite command, "Don't trust Mary", is even more complex if you
already trust Mary. How do you determine whether to trust Mary or not?
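
Here is a minimal sketch of the kind of trust propagation I mean. The names,
starting values and update rule are made up; the point is that trust in one
source gets revised using information from other sources, not just from
patterns in that source's own messages.

# Hypothetical starting trust levels.
trust = {"Bob": 0.9, "Mary": 0.5}

def endorse(endorser, target, positive=True):
    """Shift trust in the target up or down, scaled by trust in the endorser."""
    weight = trust.get(endorser, 0.0)
    direction = 1.0 if positive else -1.0
    updated = trust.get(target, 0.5) + direction * weight * 0.3
    trust[target] = min(1.0, max(0.0, updated))

endorse("Bob", "Mary")                  # "Trust Mary" from a trusted source
print(trust["Mary"])                    # rises above the 0.5 default
endorse("Bob", "Mary", positive=False)  # "Don't trust Mary": the two signals
print(trust["Mary"])                    # now conflict and partly cancel out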

A naive pattern-recognition approach is liable to exploitation of the
type I suggested in the no-free-lunch discussion.

> We know how to program it. I don't think there are "many things we don't
> know". I think we already know enough to build a practical functional
> system. =)

> > Another thing we can't specify completely in advance is the frame
> > problem. Or how to deal with faulty input (if we have an electrical
> > storm that interferes with our AGI, how would it know the inputs were
> > faulty?).
>
> The frame problem was there because of small knowledge bases and the
> inefficiency of inference. It doesn't mean that we need "new" ways of
> inference.

The methods of inference are fine as long as everything is translated
properly into the representation the method of inference uses. It is
this translation that constantly needs to be changed and updated. I
tend to look at it all as a package that needs changing, because
inference is so dependent upon having the correct input data in the
correct representation.

> > One last thing we don't know how to deal with is the forgetting problem.
> > What data should we forget? How do we determine which is the least
> > important?
>
> "Use it or lose it" may be the mechanism. We will program that.

Won't this make our systems likely to forget things like eclipses and
infrequent comets?

I do agree with "use it or lose it", just not at the scale of individual
data but at the scale of competencies. How each competency deals with
its own data is up to it, though...
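
To show what I mean by applying it at the competency level, here is a rough
sketch in Python. The competency structure, budget numbers and decay rule are
all hypothetical: a rarely-exercised competency loses overall budget, but it
stays free to protect rare-but-important records (eclipses, infrequent comets)
within that budget.

import time

class Competency:
    """One competency with its own storage budget and its own keep/forget policy."""
    def __init__(self, name, budget=100):
        self.name = name
        self.budget = budget            # storage/compute allowance
        self.last_used = time.time()
        self.records = {}

    def use(self):
        self.last_used = time.time()

    def keep(self, key, value, important=False):
        # The competency decides what fits in its budget; records marked
        # important (eclipses, infrequent comets) are not refused even when
        # the budget is full.
        if len(self.records) >= self.budget and not important:
            return
        self.records[key] = value

def reallocate(competencies, decay=0.9, idle_seconds=3600):
    """'Use it or lose it' at the competency scale: shrink the budget of
    competencies that have not been exercised recently."""
    now = time.time()
    for c in competencies:
        if now - c.last_used > idle_seconds:
            c.budget = max(1, int(c.budget * decay))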

  Will Pearson
