Ok, if your question was innocent, that is quite a different thing. Some
people here (read Mike Tinter) seem to raise such questions with the
intent of pushing us into accepting something we do not believe, and I
automatically assumed you were on the same track.

But like I said, your question, even though pretty simple in itself, calls
for a very complex answer. I am also just an AGI enthusiast, and a hobby
researcher at most, so I am not equipped with all the answers. I could,
however, try to describe how I have thought about the problem you mention.
First, some basic principles:

   - What information an AGI system learns needs to be related to the
   goal the system has in the world.
      - When an AGI system has chosen to learn a certain part of the
      world, to fill out a certain blank spot, the system needs to use its
      actuators to arrange a situation where the part in question can be
      learnt.
      - When the system knows which part of the world it wants to learn,
      it needs to use this when it decides what to remember.
   - Certain procedures for learning are better than others, or work
   differently well in different situations, so the system needs to learn
   how to learn. I call this meta-learning.
   - Meta-learning could occur in any number of layers (learning how to
   learn how to learn, and so on), but my guess would be that 2 or 3 layers
   could be sufficient.
   - The system cannot rely completely on knowing what it needs to learn,
   so some amount of learning needs to be performed randomly, we could say
   out of pure curiosity (a toy sketch combining goal-directed and
   curiosity-driven selection follows just after this list).
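
To make the combination of goal-directed and curiosity-driven selection a
little more concrete, here is a minimal Python sketch of how I imagine the
choice could be made. It is purely a toy of my own; the names
(pick_learning_target, goal_relevance, CURIOSITY_RATE) and the tag-overlap
scoring are assumptions invented for the illustration:

import random

CURIOSITY_RATE = 0.1  # fraction of choices made purely at random

def goal_relevance(gap, goal):
    """Toy score of how related a knowledge gap is to the current goal:
    simply count the tags they share."""
    return len(gap["tags"] & goal["tags"])

def pick_learning_target(gaps, goal):
    """Choose which blank spot in the world model to try to fill next."""
    if random.random() < CURIOSITY_RATE:
        return random.choice(gaps)  # curiosity-driven pick
    return max(gaps, key=lambda g: goal_relevance(g, goal))  # goal-driven

goal = {"name": "obtain food", "tags": {"food", "kitchen"}}
gaps = [
    {"name": "what is inside the fridge", "tags": {"food", "kitchen"}},
    {"name": "colour of the neighbour's car", "tags": {"car", "street"}},
]
print(pick_learning_target(gaps, goal)["name"])  # usually the fridge gap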

Based on these premises, I would guess that an AGI system would maintain a
model of the world as it currently understands it. This model is used to
create the system's actions, but also to guide its learning. The knowledge
representation needs to be able to represent missing information.
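
As one possible illustration of what "able to represent missing information"
could mean (this is only my own hypothetical encoding, not a claim about how
it must be done), the model could mark unlearnt slots with an explicit
UNKNOWN value, so the blank spots themselves can be enumerated:

UNKNOWN = object()  # sentinel marking a slot the system has not learnt yet

world_model = {
    "fridge": {"location": "kitchen", "contains": UNKNOWN},
    "door": {"location": "hallway", "is_locked": UNKNOWN},
}

def missing_slots(model):
    """List all (object, attribute) pairs that still need to be learnt."""
    return [(obj, attr)
            for obj, attrs in model.items()
            for attr, value in attrs.items()
            if value is UNKNOWN]

print(missing_slots(world_model))
# [('fridge', 'contains'), ('door', 'is_locked')]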

Because the model is used in creating the system's actions, it needs to be
mapped to the goals of the system: what things the system likes, and what it
dislikes. If the goal is to "possess an object in reality", such as food, the
learning algorithm would target all missing information in close association
with the desired object: either objects in the close vicinity of the desired
object, or objects that seem to have a causal connection with it. The system
would then make a plan for how to obtain the missing information.
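
Here is an equally rough sketch of that targeting step. The objects and the
spatial/causal relations are of course invented for the example; the point is
only that the blank spots get ranked by their association with the desired
object before a plan is made to observe them:

world = {
    # object: its location plus a list of objects it seems to causally affect
    "food": {"location": "kitchen", "causes": []},
    "fridge": {"location": "kitchen", "causes": ["food"]},
    "tv": {"location": "livingroom", "causes": []},
}

gaps = ["fridge", "tv"]  # objects with missing information
desired = "food"

def association(obj, target):
    """One point for sharing a location with the target, one for a causal link."""
    near = world[obj]["location"] == world[target]["location"]
    causal = target in world[obj]["causes"]
    return int(near) + int(causal)

ranked = sorted(gaps, key=lambda o: association(o, desired), reverse=True)
print(ranked)  # ['fridge', 'tv'] -> learn about the fridge first
# A planner would then arrange a situation (say, opening the fridge) where the
# missing information about the top-ranked object can actually be observed.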

When a certain learning algorithm on the first meta level does not produce
reliable information, or produces only a little information, it would be
replaced with some other algorithm, and a learning algorithm on the second
meta level would try to see regularities and find out what properties make up
a good learning algorithm.

Maybe the meta-level algorithms take note of the current situation, and
learn to apply a certain first-meta-level learning algorithm in the
situations where it is found suitable.
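
A very rough sketch of these last two paragraphs, closer to a trivial
bandit-style book-keeper than to a real design (the situations, algorithm
names and "information gain" numbers are all made up): the second meta level
records how well each first-level learning procedure has worked in each kind
of situation, and prefers the best one next time:

from collections import defaultdict

# situation -> algorithm -> list of information gains observed so far
scores = defaultdict(lambda: defaultdict(list))

def record(situation, algorithm, info_gain):
    """Note how much reliable information an algorithm produced in a situation."""
    scores[situation][algorithm].append(info_gain)

def choose_algorithm(situation, algorithms):
    """Pick the procedure with the best average result in this situation;
    fall back to the first candidate if nothing has been tried yet."""
    tried = scores[situation]
    if not tried:
        return algorithms[0]
    return max(tried, key=lambda a: sum(tried[a]) / len(tried[a]))

record("noisy sensors", "rote_memorisation", 0.1)
record("noisy sensors", "hypothesis_testing", 0.7)
print(choose_algorithm("noisy sensors",
                       ["rote_memorisation", "hypothesis_testing"]))
# -> hypothesis_testing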

Personally, I believe language understanding uses a form of meta-learning:
when we learn a language, we learn how to obtain knowledge given by others.

All the learning algorithms I speak of need to construct things, rather than
adjust values. An AGI learning algorithm creates concepts and hypotheses, and
then puts them to the test.
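
To show the kind of difference I mean between constructing and adjusting
values, here is a deliberately tiny sketch: instead of tuning weights, the
learner proposes candidate hypotheses (conjunctions of attribute conditions,
a format I invented just for this example) and keeps only those that survive
testing against its observations:

from itertools import combinations

observations = [
    ({"colour": "red", "shape": "round"}, "apple"),
    ({"colour": "red", "shape": "long"}, "not_apple"),
    ({"colour": "green", "shape": "round"}, "apple"),
]

def propose(obs):
    """Construct candidate hypotheses from the attributes of positive examples."""
    hypotheses = set()
    for attrs, label in obs:
        if label == "apple":
            items = tuple(sorted(attrs.items()))
            for n in range(1, len(items) + 1):
                hypotheses.update(combinations(items, n))
    return hypotheses

def survives(hypothesis, obs):
    """Keep a hypothesis only if it matches every apple and no non-apple."""
    def matches(attrs):
        return all(attrs.get(k) == v for k, v in hypothesis)
    return all(matches(attrs) == (label == "apple") for attrs, label in obs)

kept = [h for h in propose(observations) if survives(h, observations)]
print(kept)  # [(('shape', 'round'),)] -> "round things are apples"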

But like I said, this is just a rough sketch of an imaginary "airplane". To
really know whether these principles are useful, they need to be tested by
bold pioneers like Benjamin et al. who boldly try to go where no man has gone
before.

/Robert W


2008/1/7, David Butler <[EMAIL PROTECTED]>:
>
> Robert,
>
> Thank you for your time.  I am not a scientist, nor do I have an opinion or
> agenda on whether a successful AGI can be built.  I am just really curious
> and excited about the prospects.
>
>
>
>  On Jan 7, 2008, at 12:39 PM, Robert Wensman wrote:
>
>
>
> 2008/1/7, David Butler <[EMAIL PROTECTED]>:
> >
> > How would an AGI choose which things to learn first if given enough
> > data so that it would have to make a choice?
>
>
> This is a simple question that demands a complex answer. It is like asking
> "How can a commercial airliner fly across the Atlantic?". Well, in that case
> you would have to study aerodynamics, mechanics, physics, thermodynamics,
> computer science, electronics, metallurgy and chemistry for several years,
> and in the end you would discover that one single person cannot understand
> such a complex machine in its entire detail. True enough, one person could
> understand all basic principles for such a system, but explaining them would
> hardly suffice as evidence that it would actually work in practice.
>
> If you lived in medieval times, and someone asked you "how is it
> possible to cross the Atlantic in a flying machine carrying several hundred
> passengers?", what would you answer? Even if you had the expert knowledge,
> it would be very hard to explain thoroughly, just because the machine is so
> complex and you would have to explain every technology from the
> beginning. Where would you start? Maybe some person with less insight would
> interrupt you after a few sentences and say "well, clearly you cannot
> present evidence that it will ever work" and make fun of the idea, but how
> does insufficient time/space to explain a complex system prove that
> something is not possible?
>
> The same goes for AGI, for example when someone asks "how can we create a
> program that is creative and can choose what to learn?". In response to this
> it is possible to present a lot of different principles, such as
> adaptability, genetic programming, quelling of combinatorial explosions etc.
> But will the principles work in practice when put together? Well, at this
> stage we simply cannot tell. *So every person just has to make a choice
> whether to believe it is possible, or to believe it is not possible.* But
> just because no AGI researcher can answer the question "how can we create a
> program that is creative and can choose what to learn" in a few words, it
> doesn't mean it is not possible when all these principles come together. We
> just have to wait and see.
>
> To those who do not believe: Please just go away from this mailing list
> and do not interfere with the work here. Don't demand proof that it would
> work, because when we have such proof, i.e. a finished AGI system, we won't
> need to defend our hypotheses anyway.
>
>
> > If two AGIs (again, same
> > hardware, learning programs and controlled environment) were given
> > the same data, would they make different choices?
>
>
> Is a deterministic system deterministic? I do not understand what you are
> getting at. Why this question? I think Benjamin answered this question
> pretty thoroughly already.
>
> /Robert Wensman

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=82800131-bd08f3
