Re: [agi] AGI introduction

2007-06-26 Thread YKY (Yan King Yin)

Hi Pei,

I'm giving a presentation at CityU of Hong Kong next week, on AGI in general
and about my project.  Can I use your listing of representative AGIs in
one slide?

Also, if I spend one slide on NARS, what phrases would you
recommend? ;)

Thanks a lot!
YKY


Re: [agi] AGI introduction

2007-06-26 Thread Pei Wang

On 6/26/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


Hi Pei,

I'm giving a presentation at CityU of Hong Kong next week, on AGI in general
and about my project.  Can I use your listing of representative AGIs in
one slide?


Sure --- it is already in the public domain.


Also, if I spend one slide on NARS, what phrases would you
recommend? ;)


The first two sentences under NARS in the list.

Pei


Thanks a lot!
YKY 




Re: [agi] AGI introduction

2007-06-24 Thread Eliezer S. Yudkowsky

Pei Wang wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


This looks pretty good to me.  My compliments.

(And now the inevitable however...)

However, the distinction you intended between "capability" and
"principle" did not become clear to me until I looked at the very last
table, which classified AI architectures.  I was initially quite
surprised to see AIXI listed as "principle" and Cyc listed as
"capability".


I had read "capability" (to solve hard problems) as meaning the power
to optimize a utility function, like the sort of thing AIXI does to
its reward button, which, when combined with the "unified" column, would
designate an AI approach that derived every element by backward
chaining from the desired environmental impact.  But it looks like you
meant "capability" in the sense that the designers had a particular
hard AI subproblem in mind, like natural language.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] AGI introduction

2007-06-24 Thread Pei Wang

Understood. The distinction isn't explained in the short introduction
at all, and that is why I linked to my paper
http://nars.wang.googlepages.com/wang.AI_Definitions.pdf , which
explains it in a semi-formal manner.

Pei

On 6/24/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Pei Wang wrote:
 Hi,

 I put a brief introduction to AGI at
 http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
 Overview followed by Representative AGI Projects.

This looks pretty good to me.  My compliments.

(And now the inevitable however...)

However, the distinction you intended between "capability" and
"principle" did not become clear to me until I looked at the very last
table, which classified AI architectures.  I was initially quite
surprised to see AIXI listed as "principle" and Cyc listed as
"capability".

I had read "capability" (to solve hard problems) as meaning the power
to optimize a utility function, like the sort of thing AIXI does to
its reward button, which, when combined with the "unified" column, would
designate an AI approach that derived every element by backward
chaining from the desired environmental impact.  But it looks like you
meant "capability" in the sense that the designers had a particular
hard AI subproblem in mind, like natural language.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence






Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

Thanks for putting this together!  If I were to put myself into your 
theory of AI research, I would probably be roughly included in the 
Structure-AI and Capability-AI (better descriptions of the brain and 
computer programs that have more capabilities).

I haven't heard much about these systems' current Capabilities.  A lot of 
them are pretty old--like SOAR and ACT-R.

I tried finding literature on the success of some of these architectures, 
but most of the available literature was in the "theory of theories of AI" 
category.  The SOAR literature, for example, is massive and mostly focused 
on small independent projects.

Are there large real-world problems that have been solved by these 
systems?  I would find Capability links very useful if they were added.

Bo

On Fri, 22 Jun 2007, Pei Wang wrote:

) Hi,
) 
) I put a brief introduction to AGI at
) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
) Overview followed by Representative AGI Projects.
) 
) It is basically a bunch of links and quotations organized according to
) my opinion. Hopefully it can help some newcomers to get a big picture
) of the idea and the field.
) 
) Pei
) 



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:


Thanks for putting this together!  If I were to put myself into your
theory of AI research, I would probably be roughly included in the
Structure-AI and Capability-AI (better descriptions of the brain and
computer programs that have more capabilities).


It is a reasonable position, though in the long run you may have to
choose between the two, since they often conflict.


I haven't heard much about these systems' current Capabilities.  A lot of
them are pretty old--like SOAR and ACT-R.


At the current stage, no AGI system has achieved remarkable
capability. In the list, the ones with the most practical applications are
probably Cyc, SOAR, and ACT-R.


I tried finding literature on the success of some of these architectures,
but most of the available literature was in the "theory of theories of AI"
category.  The SOAR literature, for example, is massive and mostly focused
on small independent projects.


Soar and ACT-R, in their current form, are programming languages and
platforms, in the sense that whoever uses them is responsible for
writing models in them. Therefore, to say "Soar is general-purpose"
is like saying "Java is general-purpose" --- the system can be applied
in many domains, but each application is indeed a small independent
project. This is already very different from what Newell had in mind at
the beginning of Soar.


Are there large real-world problems that have been solved by these
systems?  I would find Capability links very useful if they were added.


I don't think there is any such solution, though that is not the major
issue they face as AGI projects. As I analyzed in the paper on AI
definitions, they are not designed with Capability as the primary
goal.

Pei


Bo

On Fri, 22 Jun 2007, Pei Wang wrote:

) Hi,
)
) I put a brief introduction to AGI at
) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
) Overview followed by Representative AGI Projects.
)
) It is basically a bunch of links and quotations organized according to
) my opinion. Hopefully it can help some newcomers to get a big picture
) of the idea and the field.
)
) Pei
)





Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


I think that the "hybrid" and "integrative" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For now, the two seem
almost co-extensive to me. As for the meaning: to me, "hybrid" means
integrated at the level of engineering, and "integrative" means
integrated at the conceptual level (by synthesis rather than by
dominance).



Re: [agi] AGI introduction

2007-06-23 Thread Lukasz Stafiniak

On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I think that the "hybrid" and "integrative" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For now, the two seem
almost co-extensive to me. As for the meaning: to me, "hybrid" means
integrated at the level of engineering, and "integrative" means
integrated at the conceptual level (by synthesis rather than by
dominance).


For example, the RL book shows how to integrate planning and reactive
reinforcement learning at the conceptual level.
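Presumably "the RL book" here is Sutton and Barto, whose Dyna architecture interleaves reactive, model-free Q-learning updates with planning updates replayed from a learned model. A minimal tabular Dyna-Q sketch in Python under that assumption; the env_step signature, start_state, and parameter values are invented for illustration, and terminal states are not special-cased:

import random
from collections import defaultdict

def dyna_q(env_step, actions, start_state, episodes=50, n_planning=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Dyna-Q: reactive Q-learning plus planning from a learned model."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    model = {}              # learned model: (state, action) -> (reward, next_state)
    for _ in range(episodes):
        state, done = start_state, False
        while not done:
            # Reactive part: epsilon-greedy action from the current Q table.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            reward, next_state, done = env_step(state, action)
            target = reward + gamma * max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            model[(state, action)] = (reward, next_state)
            # Planning part: replay remembered transitions from the model.
            for _ in range(n_planning):
                (s, a), (r, s2) = random.choice(list(model.items()))
                plan_target = r + gamma * max(Q[(s2, b)] for b in actions)
                Q[(s, a)] += alpha * (plan_target - Q[(s, a)])
            state = next_state
    return Q

The n_planning parameter is the conceptual integration point: setting it to zero gives a purely reactive learner, while larger values lean more on planning from the model.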



Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
)  
)  Thanks for putting this together!  If I were to put myself into your
)  theory of AI research, I would probably be roughly included in the
)  Structure-AI and Capability-AI (better descriptions of the brain and
)  computer programs that have more capabilities).
) 
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog 
in the brain.  I just want to describe the brain in a computer language, 
which will require much more advanced programming languages just to get 
computers to simulate things similar to what people can do mentally.

--

)  I haven't heard much about these systems' current Capabilities.  A lot of
)  them are pretty old--like SOAR and ACT-R.
) 
) At the current stage, no AGI system has achieved remarkable
) capability. In the list, the ones with the most practical applications are
) probably Cyc, SOAR, and ACT-R.

Well, they've been trying to find Capabilities. For example (I'm no ACT-R 
expert at all), I read a paper about how they are looking for 
correlations between their planner's stack size and fMRI BOLD signal 
voxels.  This would be a cool Capability in terms of Structure-AI if they 
were able to pull it off: a simple theory of planning, but slow progress 
toward Structure-AI.

--

)  Are there large real-world problems that have been solved by these
)  systems?  I would find Capability links very useful if they were added.
) 
) I don't think there is any such solution, though that is not the major
) issue they face as AGI projects. As I analyzed in the paper on AI
) definitions, they are not designed with Capability as the primary
) goal.

Hmm..  It seems that even if Capability-AI isn't the primary goal of the 
theory, it must be *one* of the goals.  A human-scale thinking system is 
going to have a lot of small milestones of Capability, and I'm sure many of 
these systems have reached something like them, because they've been around 
for 20-30 years.  I'm no expert on any of these systems; I'm just trying to 
find out how successful each has been in terms of Capability, which seems 
to be at least a distant subgoal of all of them.  Even if they are purely 
theoretical, they must be created with the intention of leading to other 
theories that do have Capabilities?!

Bo

) Pei
) 
)  Bo
)  
)  On Fri, 22 Jun 2007, Pei Wang wrote:
)  
)  ) Hi,
)  )
)  ) I put a brief introduction to AGI at
)  ) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
)  ) Overview followed by Representative AGI Projects.
)  )
)  ) It is basically a bunch of links and quotations organized according to
)  ) my opinion. Hopefully it can help some newcomers to get a big picture
)  ) of the idea and the field.
)  )
)  ) Pei
)  )



Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 22/06/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.

It is basically a bunch of links and quotations organized according to
my opinion. Hopefully it can help some newcomers to get a big picture
of the idea and the field.

Pei



I like the overview, but I don't think it captures every possible type
of AGI design approach, and it may overly constrain people's thoughts
as to the possibilities.

Mine, I would describe as foundationalist/integrative. That is, while
we need to integrate our knowledge of
sensing/planning/reasoning/natural language, this needs to be done in
the correct foundational architecture.

My theory is that the computer architecture has to be more brain-like
than a simple stored-program architecture in order to allow
resource-constrained AI to be implemented efficiently. The approach I
am investigating is an architecture that can direct the changing of
the programs by allowing self-directed changes to the stored programs
to persist when they are better for following a goal.

Changes can come from any source (proof, random guess, translations of
external suggestions), so the speed of change is not an issue.
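A minimal Python sketch of this persist-if-better loop, under invented simplifications: the stored "program" is just a vector of numbers, and the goal score and change operator are toy stand-ins rather than the actual architecture.

import random

def propose_change(program):
    """Stand-in for any change source: proof, random guess, or a
    translated external suggestion. Here it is just a random tweak."""
    candidate = list(program)
    i = random.randrange(len(candidate))
    candidate[i] += random.choice([-1, 1])
    return candidate

def evolve(program, goal_score, steps=1000):
    """Let a change to the stored program persist only if it is better
    for following the goal; otherwise discard it."""
    best = goal_score(program)
    for _ in range(steps):
        candidate = propose_change(program)
        score = goal_score(candidate)
        if score > best:  # the change persists only when better for the goal
            program, best = candidate, score
    return program

# Hypothetical goal: match a target vector as closely as possible.
target = [3, 1, 4, 1, 5]
print(evolve([0] * 5, lambda p: -sum(abs(a - b) for a, b in zip(p, target))))

Because only improvements persist, the source of a proposed change really is irrelevant to the architecture's stability, which is the point made above.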

 Will Pearson



Re: [agi] AGI introduction

2007-06-23 Thread Mike Tintner


- Will Pearson: My theory is that the computer architecture has to be
more brain-like than a simple stored-program architecture in order to
allow resource-constrained AI to be implemented efficiently. The
approach I am investigating is an architecture that can direct the
changing of the programs by allowing self-directed changes to the
stored programs to persist when they are better for following a goal.
Changes can come from any source (proof, random guess, translations of
external suggestions), so the speed of change is not an issue.


What's the difference between a stored program and the brain's programs that 
allows these self-directed changes to come about? (You seem to be trying to 
formulate something very fundamental.) And what kind of human mental activity 
do you see as evidence of the brain's different kind of programs?





Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I think that the "hybrid" and "integrative" descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For now, the two seem
almost co-extensive to me. As for the meaning: to me, "hybrid" means
integrated at the level of engineering, and "integrative" means
integrated at the conceptual level (by synthesis rather than by
dominance).


I use these two words to distinguish the integration in AGI projects
(e.g., Novamente ...) from the integration in mainstream AI, such as
the work reported in the Integrated Intelligence Special Track of
AAAI, though none of the latter type has reached the level of AGI yet.
Of course, the boundary is not absolute, but the difference is still
quite clear. According to mainstream AI people, all current AI
research may contribute to AGI (since the special-purpose tools can be
integrated), but according to the AGI people, even an integrated
approach should start from the big picture.

Pei



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:


On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
) 
)  Thanks for putting this together!  If I were to put myself into your
)  theory of AI research, I would probably be roughly included in the
)  Structure-AI and Capability-AI (better descriptions of the brain and
)  computer programs that have more capabilities).
)
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog
in the brain.  I just want to describe the brain in a computer language,
which will require much more advanced programming languages just to get
computers to simulate things similar to what people can do mentally.


Sure you can, but this is mostly what I call Structure-AI.
Capability-AI is more about practical problem solving, where whether
the process follows the human way doesn't matter, as in Deep Blue.


Hmm..  It seems that even if Capability-AI isn't the primary goal of the
theory, it must be *one* of the goals.


Of course. Everyone has practical applications in mind; the
difference is how much priority this goal has compared with the other
goals.

Pei



Re: [agi] AGI introduction

2007-06-23 Thread Pei Wang

On 6/23/07, William Pearson [EMAIL PROTECTED] wrote:


I like the overview, but I don't think it captures every possible type
of AGI design approach, and it may overly constrain people's thoughts
as to the possibilities.


Of course I didn't claim that, and I'm sorry if it came across that way.

What I listed under "Representative AGI Projects" are just
AGI-oriented projects with enough materials to be analyzed and
criticized. I surely know that there are many people working on other
ideas, and at the current time it is way too early to say which one
will work.

I just don't think it is possible to list all the possibilities, so
for beginners, the relatively more mature ones are better places to
start. Even if they don't like the ideas (I don't agree with many of
the ideas myself), at least they should know what has been proposed
and explored to a certain depth.

I'll be glad to include more projects in the list in the
future, as long as they satisfy the criteria set out before the list.

Pei



Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 23/06/07, Mike Tintner [EMAIL PROTECTED] wrote:


- Will Pearson: My theory is that the computer architecture has to be
more brain-like than a simple stored-program architecture in order to
allow resource-constrained AI to be implemented efficiently. The
approach I am investigating is an architecture that can direct the
changing of the programs by allowing self-directed changes to the
stored programs to persist when they are better for following a goal.
Changes can come from any source (proof, random guess, translations of
external suggestions), so the speed of change is not an issue.

What's the difference between a stored program and the brain's programs that
allows these self-directed changes to come about? (You seem to be trying to
formulate something very fundamental.)


I think the brain's programs have the ability to protect their own
storage from interference from other programs. The architecture will
only allow programs that have proven themselves better* to override
this protection on other programs, if they request it.

If you look at the brain, it is fundamentally distributed and messy. To
stop errors propagating as they do in stored-program architectures, you
need something more decentralised than the currently attempted
dictatorial kernel control.

It is instructive to look at how stored-program architectures have
been struggling to secure against buffer overruns, to protect against
inserted code subverting the rest of the machine.
Measures that have been taken include no-execute bits on
non-programmatic memory and randomising where programs are stored in
memory so they can't be overwritten. You are even getting to the stage
in trusted computing where you aren't allowed to access certain
portions of memory unless you have the correct cryptographic
credentials. I would rather go another way: if you have some form of
knowledge of what a program is worth embedded in the architecture,
then you should be able to limit these sorts of problems and allow
more experimentation.

If you try self-modifying and experimental code on a simple
stored-program system, it will generally cause errors and lots of
problems when things go wrong, as there are no safeguards on what the
program can do. You can lock the experimental code in a sandbox, as in
genetic programming, but then it can't replace older code or change
the methods of experimentation. You can also use formal proof, but
that greatly limits what sources of information you can use as
inspiration for the experiment.

My approach allows an experimental bit of code, if it proves itself by
being useful, to take the place of other code, if it happens to be
coded to take over the function as well.


And what kind of human mental activity
do you see as evidence of the brain's different kind of programs?


Addiction. Or the general goal-optimising behaviour of the various
different parts of the brain. We notice things more if they are
important to us, which implies that our noticing functionality
improves depending upon what our goal is. There is also the general
pervasiveness of the dopaminergic neural system, which I think has an
important function in determining which programs or neural areas are
being useful.

* I shall now get back to how code is determined to be useful.
Interestingly, it is somewhat like the credit attribution for how much
work people have done on the AGI projects that some people have been
discussing. My current thinking is something like this: there is a
fixed function that can recognise manifestly good and bad situations,
and it provides a value every so often to all the programs that have
control of an output. If things are going well and some food is found,
the value goes up; if an injury is sustained, the value goes down. It
is the basic reinforcement learning idea.

The value becomes, in the architecture, a fungible, distributable, but
conserved resource, analogous to money, although when it is used to
overwrite something it is removed, dependent upon how useful the
overwritten program was. The outputting programs pass it back to the
programs that have given them the information they needed to output,
whether that information is from long-term memory or processed from
the environment. These second-tier programs pass it further back.
However, the method of determining who gets the credit doesn't have to
always be a simplistic function; the programs can have heuristics on
how to distribute the utility based on the information they get from
each of their partners. As these heuristics are just part of each
program, they can change as well.

So in the end you get an economy of programs that aren't forced to do
anything; just those that perform well can overwrite those that don't
do so well. It is a very loose constraint on what the system actually
does. On top of this, in order to get an AGI, you would integrate
everything we know about language, senses, naive physics, mimicry and
other things yet to be discovered. Also adding the new knowledge we 
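A toy Python sketch of the conserved-credit flow described above, with invented names and an assumed even-split, keep-half heuristic (each program's heuristic could of course differ and change):

class Program:
    """One program in the economy: holds credit and remembers which
    programs supplied the information it used."""
    def __init__(self, name, keep=0.5):
        self.name = name
        self.credit = 0.0
        self.keep = keep     # heuristic: fraction of incoming credit kept
        self.suppliers = []  # programs whose information this one used

    def receive(self, amount):
        """Keep a share and pass the rest back upstream; the total is conserved."""
        if not self.suppliers:
            self.credit += amount
            return
        passed = amount * (1.0 - self.keep)
        self.credit += amount - passed
        for s in self.suppliers:  # even split: one simple distribution heuristic
            s.receive(passed / len(self.suppliers))

# The fixed reward function pays the program controlling the output,
# and credit flows back through the second tier and beyond.
memory, perception = Program("memory"), Program("perception")
output = Program("output")
output.suppliers = [memory, perception]
output.receive(10.0)  # e.g. some food was found, so the value goes up
print({p.name: p.credit for p in (output, memory, perception)})

Programs that accumulate credit can then spend it to overwrite less useful programs, with the spent credit deducted as described above.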

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to override
) this protection on other programs, if they request it.
) 
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the currently attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?  
For example, removing small parts of the brainstem results in a coma.

) The value becomes, in the architecture, a fungible, distributable, but
) conserved resource, analogous to money, although when it is used to
) overwrite something it is removed, dependent upon how useful the
) overwritten program was. The outputting programs pass it back to the
) programs that have given them the information they needed to output,
) whether that information is from long-term memory or processed from
) the environment. These second-tier programs pass it further back.
) However, the method of determining who gets the credit doesn't have to
) always be a simplistic function; the programs can have heuristics on
) how to distribute the utility based on the information they get from
) each of their partners. As these heuristics are just part of each
) program, they can change as well.

Are there elaborations on this theory (or a general name that I could 
look up)? It sounds good. For example, you're referring to multiple tiers 
of organization, which sound like larger-scale organizations that maybe 
have been discussed further elsewhere?

It sounds like there are intricate dependency networks that must be 
maintained, for starters, and a lot of supervision and support code that 
does this -- or is that evolved in the system also?

--
Bo



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 24/06/07, Bo Morgan [EMAIL PROTECTED] wrote:


On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to override
) this protection on other programs, if they request it.
)
) If you look at the brain, it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored-program architectures, you
) need something more decentralised than the currently attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in a coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent state to deep sleep. Just one that cannot be


) The value becomes, in the architecture, a fungible, distributable, but
) conserved resource, analogous to money, although when it is used to
) overwrite something it is removed, dependent upon how useful the
) overwritten program was. The outputting programs pass it back to the
) programs that have given them the information they needed to output,
) whether that information is from long-term memory or processed from
) the environment. These second-tier programs pass it further back.
) However, the method of determining who gets the credit doesn't have to
) always be a simplistic function; the programs can have heuristics on
) how to distribute the utility based on the information they get from
) each of their partners. As these heuristics are just part of each
) program, they can change as well.

Are there elaborations on this theory (or a general name that I could
look up)? It sounds good. For example, you're referring to multiple tiers
of organization, which sound like larger-scale organizations that maybe
have been discussed further elsewhere?


Sorry, it is pretty much all just me at the moment, and the higher
tiers of organisation are just fragments that I know will need to be
implemented or planned for, but that I have no concrete ideas for at
the moment. I haven't written up everything at the low level either,
because I am not working on this full time. I hope to start a PhD on
it soon, although I don't know where. It will mainly involve trying
to get a theory of how to design the system properly, so that
the system will only reward those programs that do well and won't
encourage defectors to spoil what other programs are doing, based on
game theory and economic theory. That is the level I am mainly
concentrating on right now.


It sounds like there are intricate dependency networks that must be
maintained, for starters, and a lot of supervision and support code that
does this -- or is that evolved in the system also?


My rule of thumb is to put as much as possible into the
changeable/evolving section, but to code it by hand to start with if it
is needed for the system to start to do some work. The only reason to
keep something on the outside is if the system would be unstable with
it on the inside, e.g. the functions that give out reward.

Will Pearson



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

Sorry, sent accidentally while half finished.

Bo wrote:

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in a coma.


I'm talking about control of memory access, and by memory access I am
referring to synaptic changes in the brain. While the brain stem has
dictatorial control over consciousness and activity, it does not
necessarily control all activity in the brain in terms of memory and
how it changes, which is what I am interested in.

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent brain state to deep sleep, just one that cannot
be stopped in the usual fashion.

Will Pearson



[agi] AGI introduction

2007-06-22 Thread Pei Wang

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.

It is basically a bunch of links and quotations organized according to
my opinion. Hopefully it can help some newcomers to get a big picture
of the idea and the field.

Pei



Re: [agi] AGI introduction

2007-06-22 Thread Lukasz Stafiniak

On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.


Thanks! As a first note, SAIL seems to me a better choice than
Cog, because SAIL has more generality and some theoretical
accomplishment, whereas Cog is (AFAIK) hand-crafted engineering.



Re: [agi] AGI introduction

2007-06-22 Thread Mike Tintner


Pei: I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm , including an AGI
Overview followed by Representative AGI Projects.


Very helpful. Thank you.



Re: [agi] AGI introduction

2007-06-22 Thread Pei Wang

On 6/22/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:


As a first note, SAIL seems to me a better choice than
Cog, because SAIL has more generality and some theoretical
accomplishment, whereas Cog is (AFAIK) hand-crafted engineering.


In many aspects, I agree that SAIL is more interesting than Cog.

I include Cog in the list, because it is explicitly based on a theory
about intelligence as a whole (see
http://groups.csail.mit.edu/lbr/hrg/1998/group-AAAI-98.pdf), while in
SAIL such a theory is not very clear. Of course, this boundary is
fuzzy, so I may include SAIL in a future version of the list,
depending on the development of the project.

Pei



Re: [agi] Introduction (searching for research, PhD), Open Source AI?

2005-12-22 Thread Mark Horvath
Dear YKY and Jiri Jelinek,

AGI PROJECT

Under "unsupervised learning" I meant compression of facts, rules, and inputs. The similarities derived from the compression can be used as instinctual associations between different ideas, and also to reduce the size of the KB. The basic operation in my system is also pattern recognition, and I am planning to implement reasoning (different logic systems, beginning with first-order) over the pattern-matching layer. (The PRS supports forward and backward chaining; a sketch of the basic chaining idea follows below.)

I am not familiar with statistical pattern recognition. Does this mean that each pattern (or condition) has a certain probability based on the probabilities of the sub-patterns in it? If so, that sounds useful. I am planning to implement features like this on top of the PRS as well.

I am using a PRS (well, with chaining, a somewhat more powerful framework), since it seems to be flexible enough for a self-modifying AI, and the RETE algorithm ensures the efficient execution of rules whose conditions might have come true.

I think our projects are pretty similar (the basic difference, which I suppose based on your questions, is that you might have based the system on probability and built pattern matching over that, while I am doing the opposite). I am interested in your project...

SAFE AGI

I agree with Jiri Jelinek; I also think that the easier problems should be targeted first (e.g., reasoning), and then the computationally heavy ones. (Also, knowledge grounding is not a problem until one has sensors and actuators, whether textual, visual, or of any other type.)

What do you mean by the SIAI visions being over the top? To me they seem rational. If we suppose the AGI is a rational being, then the goal "Do what you think that I want you to do" will steer the self-improvement in the right direction as well. Or am I wrong?

From this point of view the hierarchical system also seems unnecessary (although it might be good for the sake of security). And I think that if we can't make a single AGI safe, then the hierarchy is also dangerous. The hierarchy might give better security from a statistical point of view, but if the AGIs in it are more clever than us or than each other, that might mean problems. Is it possible to create an unselfish institution from selfish individuals? (I think they will start to cooperate for more profit.)
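As referenced above, a naive Python sketch of the forward-chaining half of a PRS; the facts and rules are invented toy examples, and a real RETE network reaches the same fixed point without re-matching every rule from scratch:

def forward_chain(facts, rules):
    """Naive forward chaining: keep firing rules whose conditions all hold
    until no new facts appear. RETE computes the same closure efficiently
    by only re-checking rules whose conditions might have come true."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical toy rules, each a (conditions, conclusion) pair:
rules = [({"has_wings", "lays_eggs"}, "is_bird"),
         ({"is_bird"}, "has_feathers")]
print(forward_chain({"has_wings", "lays_eggs"}, rules))

Backward chaining runs the same rules in the other direction, from a goal fact toward supporting conditions.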
Best wishes,
Márk

On 12/21/05, Yan King Yin [EMAIL PROTECTED] wrote:

Mark:

MY LITTLE AGI PROJECT

Since I started my studies I have been interested in AI and creating AGI, so I have tried to learn as much as possible about various AI disciplines, in order to unify them later. In my spare time, I am working on a Production Rule System with reasoning abilities, which I plan to program/teach to use various AI techniques (such as EA, RL, classification, unsupervised learning...), to evolve and develop the rules and facts in the system. I am interested in your opinion about a system like this.


Your system sounds interesting, although it's not an entire AGI framework. You're on the right track trying to unify various AI approaches (such as planning, reasoning, perception, etc.). I have some basic ideas of how to build an AGI, but my project is still in its infancy. I'd welcome other researchers to join my open source project.


My AGI theory is based on the compression of sensory experience, and the basic operation is pattern recognition. Traditional production rule systems may be a bit too limited because they cannot perform probabilistic inference or statistical pattern recognition.


Right now we're focusing on vision, which turns out to be extremely hard.

Re your analysis of AGI social issues: I think there should be some sort of built-in AGI mechanism that prevents it from doing harmful things, although the exact form of it is still unclear to me. The folks at SIAI have thought about this issue much more intensely, but I think their vision is a bit over the top.


Secondly, I agree that AGI may create more social inequality between those who know how to exploit AGI and those who are left behind. I'm afraid this is also inevitable. The best we can do is try to ameliorate such effects. The good side is that AGI will be very easy to use, because it can understand human language.


Cheers,
yky






Re: [agi] Introduction (searching for research, PhD), Open Source AI?

2005-12-18 Thread Jiri Jelinek
Márk,

My suggestion is to first develop AGI (partly open source would IMO be OK - you don't
have to hide everything) and then mess with the moral issues and
related system restrictions for certain groups of users.

BTW, I'm optimistic when it comes to our future with AGI. AGI (after some significant learning) can help
us solve all kinds of problems, including the social issues and the incompatibilities between the value
systems of various subjects (= a key source of conflicts).

humans being useless after the creation of AGI, having no jobs

The world will certainly change, and there might be some tough
transition stages if we let things go too fast, but eventually, I believe, we are
simply not gonna need to have a job. Hard work for machines; fun for
us. They will not mind at all if designed properly. But I really think we had
better focus on the AGI how-to at this point. In my AGI R&D, it's mainly the
knowledge representation problem right now.


Sincerely,
Jiri Jelinek

On 12/17/05, Mark Horvath [EMAIL PROTECTED] wrote:

Dear AGI People,

Before asking and writing my opinion about the dangers of AGI, I would like to introduce myself and my activity in AI.

ME

I studied as a Programmer Mathematician in Hungary, and in the last year I changed my degree to Artificial Intelligence in the Netherlands. I have just finished this degree, and now I am searching for a PhD or research position (detailed CV at http://people.inf.elte.hu/cyber/MarkHorvath_CV_2005.pdf).

MY LITTLE AGI PROJECT

Since I started my studies I have been interested in AI and creating AGI, so I have tried to learn as much as possible about various AI disciplines, in order to unify them later. In my spare time, I am working on a Production Rule System with reasoning abilities, which I plan to program/teach to use various AI techniques (such as EA, RL, classification, unsupervised learning...), to evolve and develop the rules and facts in the system. I am interested in your opinion about a system like this.

OPEN SOURCE AGI?

I was thinking of open-sourcing my project (to attract more people to the project), but you are right, AGI can be dangerous if used cleverly, so now I am thinking things over again... The things I am thinking about are the following:

Technical issues

Dangers?

In my view the primary (technical) danger is the usage of AGI to hack computers and spread on the Internet. Do you see other dangers as well? What else can one do that is even worse than spamming and hacking computers? Do you mean attacking the banking system and the economy, or obtaining huge power by influencing communication? (I'm not familiar enough with the stock market, e-commerce, and economics.)

Company profit instead of the laws of Asimov!!

I think one way to be safe against AI is to make it really friendly, and not let anyone change the main laws, which should include things like making this world better for humans. This would make it necessary to protect the source. The problem with this is that no company could use such an AI really usefully, since their goal is to get rich, not to make the world better. The AGI would realize quickly that companies are selfish beings.

And anyway, I agree with Sanjay: power corrupts easily. (See the book by Steven Levy, Hackers: Heroes of the Computer Revolution. It describes today's leaders of the commercial computer industry, and how freely they were thinking before they had power...)

Hackers would have an advantage

I think several groups will succeed in creating AGI once the computational resources and the knowledge about the mind are sufficient. Thus one will easily be able to find or buy an AGI that can be misused. If someone paid enough money to gain world dominance through hacked computers and AI, that person could program an AGI based on the literature and open source projects as well, and would obtain millions of computers very fast. I think the knowledge for creating AGI is out there already (in pieces). And a big safety problem is that hackers might have a great advantage in having big computational capacity. I hope there is not a single bad guy ambitious enough...

Will bad guys use the newest technology?

On the other hand: how big is this danger? Are you sure that the bad guys would use it? They don't even use AIML techniques to answer mails and spread viruses that way. They don't use genetic algorithms to modify their viruses' spreading strategies...

Source code not necessary to misuse AGI

Even if one does not open-source the AI, one can use it for bad purposes by simply teaching it and ordering it to do bad things (depending on the level of friendliness in it, which I think will be weak, for the reason mentioned above).

A solution to protect us from AGI? (Infecting protection)

Sanjay has written: How to protect the general public from misuse of AGI? Maybe the answer lies in AGI itself - make an AGI which can detect such attempts, equip the potential victims with it, and let the fight begin on equal ground. Once AGI becomes smarter than humans, only AGI will be able to save humans from itself.

I agree, my

[agi] Introduction (searching for research, PhD), Open Source AI?

2005-12-17 Thread Mark Horvath
Dear AGI People,

Before asking and writing my opinion about the dangers of AGI, I would like to introduce myself and my activity in AI.

ME

I studied as a Programmer Mathematician in Hungary, and in the last year I changed my degree to Artificial Intelligence in the Netherlands. I have just finished this degree, and now I am searching for a PhD or research position (detailed CV at http://people.inf.elte.hu/cyber/MarkHorvath_CV_2005.pdf).

MY LITTLE AGI PROJECT

Since I started my studies I have been interested in AI and creating AGI, so I have tried to learn as much as possible about various AI disciplines, in order to unify them later. In my spare time, I am working on a Production Rule System with reasoning abilities, which I plan to program/teach to use various AI techniques (such as EA, RL, classification, unsupervised learning...), to evolve and develop the rules and facts in the system. I am interested in your opinion about a system like this.

OPEN SOURCE AGI?

I was thinking of open-sourcing my project (to attract more people to the project), but you are right, AGI can be dangerous if used cleverly, so now I am thinking things over again... The things I am thinking about are the following:

Technical issues

Dangers?

In my view the primary (technical) danger is the usage of AGI to hack computers and spread on the Internet. Do you see other dangers as well? What else can one do that is even worse than spamming and hacking computers? Do you mean attacking the banking system and the economy, or obtaining huge power by influencing communication? (I'm not familiar enough with the stock market, e-commerce, and economics.)

Company profit instead of the laws of Asimov!!

I think one way to be safe against AI is to make it really friendly, and not let anyone change the main laws, which should include things like making this world better for humans. This would make it necessary to protect the source. The problem with this is that no company could use such an AI really usefully, since their goal is to get rich, not to make the world better. The AGI would realize quickly that companies are selfish beings.

And anyway, I agree with Sanjay: power corrupts easily. (See the book by Steven Levy, Hackers: Heroes of the Computer Revolution. It describes today's leaders of the commercial computer industry, and how freely they were thinking before they had power...)

Hackers would have an advantage

I think several groups will succeed in creating AGI once the computational resources and the knowledge about the mind are sufficient. Thus one will easily be able to find or buy an AGI that can be misused. If someone paid enough money to gain world dominance through hacked computers and AI, that person could program an AGI based on the literature and open source projects as well, and would obtain millions of computers very fast. I think the knowledge for creating AGI is out there already (in pieces). And a big safety problem is that hackers might have a great advantage in having big computational capacity. I hope there is not a single bad guy ambitious enough...

Will bad guys use the newest technology?

On the other hand: how big is this danger? Are you sure that the bad guys would use it? They don't even use AIML techniques to answer mails and spread viruses that way. They don't use genetic algorithms to modify their viruses' spreading strategies...

Source code not necessary to misuse AGI

Even if one does not open-source the AI, one can use it for bad purposes by simply teaching it and ordering it to do bad things (depending on the level of friendliness in it, which I think will be weak, for the reason mentioned above).

A solution to protect us from AGI? (Infecting protection)

Sanjay has written: How to protect the general public from misuse of AGI? Maybe the answer lies in AGI itself - make an AGI which can detect such attempts, equip the potential victims with it, and let the fight begin on equal ground. Once AGI becomes smarter than humans, only AGI will be able to save humans from itself.

I agree; my problem is that I am not sure people will install the protector AGIs before having real problems. I have an idea to get around this problem, but I am not sure about the legal issues. What if someone writes the hacking AGI before the hackers do, and uses it to secure the computers? After hacking a machine it could warn the system administrator/user about the system's vulnerabilities, and also offer to install itself and protect the vulnerable computer from the inside. Is it legal to hack computers with good intentions in any country?

My opinion - at first look

Summarizing the former statements, I think that if someone created AGI now, it would be dangerous to provide it to anyone, whether the whole public or only paying customers (the latter seems even more dangerous to me). But when the time comes, anyone (and people with money even more easily) will be able to have AGI...

Social issues

Anyway, I am more afraid of the social dangers of AI: humans being useless after the creation of AGI, having no jobs, and at this time we can't

[agi] Introduction

2005-12-15 Thread Lucas Silva
Hi,
I would like to introduce myself. My name is Lucas Serpa, I am 23
years old, from Brazil, and I would like to get involved in the AGI
world, extracting as much knowledge as I can and, if possible, offering some.


L.Serpa



Re: [agi] Introduction

2005-12-15 Thread Ben Goertzel
Hello Lucas,

Welcome to the AGI list!

Where in Brazil are you located?  I ask because there happen to be a
couple folks working on the Novamente AGI project in Belo Horizonte at
www.vettalabs.com ...

-- Ben



On 12/15/05, Lucas Silva [EMAIL PROTECTED] wrote:
 Hi,
 I would like to introduce myself. My name is Lucas Serpa, I am 23
 years old, from Brazil, and I would like to get involved in the AGI
 world, extracting as much knowledge as I can and, if possible, offering some.


 L.Serpa





Re: [agi] Introduction

2005-12-15 Thread Lucas Silva
I am currently living a few hours from BH in São Paulo.


On 12/15/05, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hello Lucas,

 Welcome to the AGI list!

 Where in Brazil are you located?  I ask because there happen to be a
 couple folks working on the Novamente AGI project in Belo Horizonte at
 www.vettalabs.com ...

 -- Ben



 On 12/15/05, Lucas Silva [EMAIL PROTECTED] wrote:
  Hi,
  I would like to introduce myself. My name is Lucas Serpa, I am 23
  years old, from Brazil, and I would like to get involved in the AGI
  world, extracting as much knowledge as I can and, if possible, offering some.
 
 
  L.Serpa
 


[agi] Introduction

2003-07-07 Thread wanlongbags
Dear Sir,

Good day!

This is the Wanlongbags company. We are a member of bizeurope, so we got your 
information from them, and we produce all kinds of bags. If you or your friends are 
interested in this line, please forward this letter to them, thanks a lot. Please visit 
our website www.wanlongbags.com and feel free to send feedback. We look forward 
to cooperating with you.

Otherwise, I am sorry to bother you.

Best Wishes!
Sincerely Yours,
Wanlongbags Co. Ltd


Re: [agi] introduction

2002-12-10 Thread Michael Roy Ames
Damien Sullivan wrote:
 Hi!  I joined this list recently, figured I'd say who I am.  Well,
 some of you may know already, from extropians, where I used to post a
 fair bit :) or from my Vernor Vinge page.  But now I'm a first year
 comp sci/cog sci PhD student at Indiana University, hoping to work on
 extending Jim Marshall's Metacat in Hofstadter's lab.  Nothing much
 has really happened beyond hope and a few meetings and taking his
 group theory class.  I've been reading Eliezer's _Levels_ pages, and
 having Andy Clark's _Being There_ around, but mostly my life has been
 classes.  Mostly the OS class, actually.  Sigh.

 -xx- Damien X-)



Damien,

Hi.  I'm quite interested in Jim's Metacat also.  It's on my To-Do list
to get it running under Linux... but the way my workload is going, I think
Jim will get his planned rewrite done first. :)  It would be interesting
to hear about what new directions Metacat is going in.  Welcome to the list.

Michael Roy Ames
