Re: [agi] Self-maintaining Architecture first for AI

2008-05-11 Thread William Pearson
2008/5/11 Russell Wallace [EMAIL PROTECTED]:
 On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
 It depends on the system you are designing on. I think you can easily
 create as many types of sand box as you want in programming language E
 (1) for example. If the principle of least authority (2) is embedded
 in the system, then you shouldn't have any problems.

 Sure, I'm talking about much lower-level concepts though. For example,
 on a system with 8 gigabytes of memory, a candidate program has
 computed a 5 gigabyte string. For its next operation, it appends that
 string to itself, thereby crashing the VM due to running out of
 memory. How _exactly_ do you prevent this from happening (while
 meeting all the other requirements for an AI platform)? It's a
 trickier problem than it sounds like it ought to be.


I'm starting to mod qemu (it is not a straightforward process) to add
capabilities. The VM will have a set amount of memory, and if a
location outside this memory is referenced, it will throw a page fault
inside the VM rather than crash it directly. The system inside the VM
can then deal with the fault however it wants to, hopefully something
smarter than "Oh no, I have made a bad memory reference, I must stop
all my work and lose everything!"
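
Roughly, the kind of check I have in mind looks something like this
(purely illustrative C, hypothetical names, not actual qemu internals):

/* Hypothetical sketch, not real qemu code: bound a guest memory access
 * and report an overrun to the guest as a page fault instead of
 * aborting the emulator. */
#include <stdbool.h>
#include <stdint.h>

#define GUEST_MEM_BYTES (256ULL * 1024 * 1024)   /* the VM's fixed memory budget */

/* inject_fault: hypothetical callback that delivers a page fault to the
 * guest, so whatever runs inside the VM gets to decide what to do. */
bool guest_access_check(uint64_t addr, uint64_t len,
                        void (*inject_fault)(uint64_t))
{
    if (addr >= GUEST_MEM_BYTES || len > GUEST_MEM_BYTES - addr) {
        inject_fault(addr);   /* the fault stays inside the VM */
        return false;         /* the offending access is suppressed */
    }
    return true;              /* within the budget, let it through */
}
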
In the greater scheme of things, the model that a computer has
unlimited virtual memory has to go as well. Otherwise you might end up
with important things paged out to the hard disk, lots of thrashing,
and ephemera sitting in main memory. You could still build high-level
abstractions on top, but unlimited virtual memory is not the
abstraction to expose to the low-level programs.
  Will Pearson



Re: [agi] Self-maintaining Architecture first for AI

2008-05-11 Thread Russell Wallace
On Sun, May 11, 2008 at 7:45 AM, William Pearson [EMAIL PROTECTED] wrote:
 I'm starting to mod qemu (it is not a straightforward process) to add
 capabilities.

So if I understand correctly, you're proposing to sandbox candidate
programs by running them in their own virtual PC, with their own
operating system instance? I assume this works recursively, so a
qemu-sandboxed program can itself run qemu?



Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread William Pearson
2008/5/10 Richard Loosemore [EMAIL PROTECTED]:


 This is still quite ambiguous on a number of levels, so would it be possible
 for you to give us a road map of where the argument is going? At the moment
 I am not sure what the theme is.

That is because I am still unclear about what the later levels are.
A fair amount depends upon the application. For example, if you are
trying to build an AI for a jet plane, you need to be a lot more
careful about how the system explores.

I have in my mind a number of things I can't do with current computers
and would like to experiment with.

1) A system containing programs that can generate new learning
programs dependent upon the inputs they receive. A learning program
would take its inputs either from the outside, from the outputs of
other learning programs, or from the values of other learning
programs, and it would use a variable amount of resources. Its outputs
could go to other learning programs, or could be the generation of a
new learning program. Self-maintenance is required to rate the
learning programs and whittle them down. A learning program could be
anything from an SVM to a genetic programming system. (There is a
rough sketch of what I mean after point 2 below.)

2) A system, similar to automatic programming, that takes descriptions
in a formal language supplied from outside, potentially malicious,
sources and generates a program from them. The language would be
sufficient to specify new generative elements in, and so would be
extensible in that fashion. A system that cannot maintain itself while
trying to do this would quickly get swamped by viruses and the like.
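
As a very rough sketch of the bookkeeping behind point 1 (everything
here is invented for illustration): each learning program gets a
rating and a resource grant, and the low-rated ones are whittled away
so their resources can be reused:

/* Illustrative only: a pool of learning programs, each with a rating
 * and a resource grant; the lowest-rated ones get whittled away. */
#include <stddef.h>

#define MAX_PROGRAMS 64

struct learner {
    int    in_use;        /* is this slot occupied? */
    double rating;        /* how useful the program has been so far */
    size_t memory_bytes;  /* resources currently granted to it */
};

static struct learner pool[MAX_PROGRAMS];

/* Drop any program whose rating has fallen below the cutoff and
 * reclaim its resources for newly generated programs. */
size_t whittle(double cutoff)
{
    size_t reclaimed = 0;
    for (int i = 0; i < MAX_PROGRAMS; i++) {
        if (pool[i].in_use && pool[i].rating < cutoff) {
            reclaimed += pool[i].memory_bytes;
            pool[i].memory_bytes = 0;
            pool[i].in_use = 0;
        }
    }
    return reclaimed;
}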

I'd probably create a hybrid of the two, but I am fully aware that I
don't have enough knowledge to discount other approaches. Once I have
got the system working to my satisfaction (both experimentally and by
showing that being good is evolutionarily stable, and once I have
minimised tragedy-of-the-commons-type failures), I'll go on to study
higher-level problems. I have the (slight) hope that other people will
pick up my system and take it places I can't currently imagine.

  Will Pearson



Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 8:38 AM, William Pearson [EMAIL PROTECTED] wrote:
 2) A system similar to automatic programming that takes descriptions
 in a formal language given from the outside and potentially malicious
 sources and generates a program from them. The language would be
 sufficient to specify new generative elements in and so extensible in
 that fashion. A system that cannot maintain itself trying to do this
 would quickly get swamped by viruses and the like.

If I'm understanding you correctly, what you need - or at least one
thing you need - is sandboxing: the ability to execute an arbitrary
program with the assurance that it can't compromise the environment.
This is trickier than it seems; I've been thinking about it on and off
for a while. Most systems don't even attempt to provide full
sandboxing; the most they try to do is restrict vulnerability to
denial of service - good enough to get by on for a web browser, but
probably not good enough for an AI system that might be running
millions of candidate programs over a long, unattended run. And unless
I'm missing something, you can't retrofit it after the fact: e.g. you
can't sandbox Java, you have to go back to C and write your own VM,
garbage collector etc.
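
To give a feel for what writing your own VM buys you, here's a toy
sketch (purely illustrative, not any particular system) where the step
and memory budgets are built into the execution loop from the start:

/* Toy illustration: a bytecode loop that enforces step and memory
 * budgets itself, so a hostile program can only fail, not take the
 * host down with it. */
#include <stddef.h>

enum op     { OP_HALT, OP_WORK, OP_ALLOC };
enum result { RAN_OK, OUT_OF_STEPS, OUT_OF_MEMORY };

struct sandbox {
    long   steps_left;   /* CPU budget for this candidate program */
    size_t mem_left;     /* memory budget for this candidate program */
};

enum result run(struct sandbox *sb, const unsigned char *code, size_t len)
{
    for (size_t pc = 0; pc < len; pc++) {
        if (sb->steps_left-- <= 0)
            return OUT_OF_STEPS;        /* stop the program, not the host */
        switch (code[pc]) {
        case OP_HALT:
            return RAN_OK;
        case OP_ALLOC:
            if (sb->mem_left < 1024)
                return OUT_OF_MEMORY;   /* refuse the allocation, don't die */
            sb->mem_left -= 1024;       /* grant a 1 KB chunk */
            break;
        default:                        /* OP_WORK and anything else: no-op here */
            break;
        }
    }
    return RAN_OK;
}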

I'm told MIT Scheme provides sandboxing, though I haven't had a chance
to try it out. I don't know of any other nontrivial environment that
does so. (Lots of trivial ones do, of course: Corewars, Tierra, an
ICFP contest a couple of years ago etc.)



Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
 It depends on the system you are designing on. I think you can easily
 create as many types of sand box as you want in programming language E
 (1) for example. If the principle of least authority (2) is embedded
 in the system, then you shouldn't have any problems.

Sure, I'm talking about much lower-level concepts though. For example,
on a system with 8 gigabytes of memory, a candidate program has
computed a 5 gigabyte string. For its next operation, it appends that
string to itself, thereby crashing the VM due to running out of
memory. How _exactly_ do you prevent this from happening (while
meeting all the other requirements for an AI platform)? It's a
trickier problem than it sounds like it ought to be.
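
To make the difficulty concrete, the VM would have to be making
something like the following check on every single operation (a sketch
only, with invented numbers), so the append fails recoverably inside
the sandbox rather than taking the host down:

/* Sketch: appending a huge string to itself should fail against the
 * candidate's quota, not exhaust the host's memory. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct quota { uint64_t used, limit; };   /* per-candidate memory accounting */

/* Returns NULL - a recoverable error for the candidate program - when
 * the result would blow its quota; the host itself never runs out. */
char *checked_append(struct quota *q, const char *s, uint64_t s_len)
{
    uint64_t need = 2 * s_len + 1;               /* s followed by s, plus NUL */
    if (need < s_len || q->used + need > q->limit)
        return NULL;                             /* over quota: refuse */
    char *out = malloc((size_t)need);
    if (out == NULL)
        return NULL;                             /* the host is protected too */
    memcpy(out, s, s_len);
    memcpy(out + s_len, s, s_len);
    out[2 * s_len] = '\0';
    q->used += need;
    return out;
}
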



[agi] Self-maintaining Architecture first for AI

2008-05-09 Thread William Pearson
After getting off completely on the wrong foot the last time I posted
something, and not having had time to read the papers I should have, I
have decided to start afresh and outline where I am coming from. I'll
get around to writing a proper paper later.

There are two possible modes for designing a computer system. I shall
characterise them as the active and the passive. The active approach
attempts to solve the problem directly, whereas the passive approach
gives a framework under which the problem and related ones can be more
easily solved. The passive approach is generally less efficient but
much more reconfigurable.

The passive approach is used when there is a large number of related
possible problems, with a large variety of solutions. Examples of the
passive approach are mainly architectures, programming languages and
operating systems, with a variety of different goals. They are not
always completely passive; for example, automatic garbage collection
impacts the system somewhat. One illuminating example is the variety
of security systems that have been built along these lines. Security
in this sense means that the computer system is composed of domains,
not all of which are equally trusted or allowed the same resources.
Now, it is possible to set up a passive system designed with security
in mind insecurely, for instance by allowing all domains to access
every file on the hard disk. Passive systems do not guarantee the
solution they are aiming to aid; the most they can do is allow as many
things as possible to be represented and permit the prevention of
certain things. A passive security system allows one domain to be
prevented from lowering the security of part of another domain.
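
As a toy illustration of what I mean by domains (entirely invented,
and far cruder than a real capability system such as E's): each domain
holds only the rights it has explicitly been granted, and anything
outside that set is simply refused:

/* Toy illustration of domains with explicit, per-domain rights:
 * a domain can only touch resources it has been granted. */
#include <stdbool.h>
#include <string.h>

#define MAX_GRANTS 8

struct domain {
    const char *name;
    const char *granted[MAX_GRANTS];   /* the only resources it may touch */
    int         n_granted;
};

/* Purely passive: the check prevents access, but says nothing about
 * what the domain ought to be doing with the resources it does hold. */
bool may_access(const struct domain *d, const char *resource)
{
    for (int i = 0; i < d->n_granted; i++)
        if (strcmp(d->granted[i], resource) == 0)
            return true;
    return false;
}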

The set of problems that I intend to help solve is that of
self-maintaining computer systems. Self-maintenance is basically
reconfiguring the computer to be suited to the environment it finds
itself in. The reasons I think it needs to be solved before AI is
attempted are 1) humans self-maintain, and 2) otherwise the very
complex computer systems we build for AI will have to be maintained by
us, which may become increasingly difficult as they approach human
level.

It is worth noting that I am using AI in the pure sense of being able
to solve problems. It is entirely possible to get very complex problem
solvers (potentially including ones that pass the Turing test) that
cannot self-maintain.

There is a large variety of possible AIs (different
bodies/environments/computational resources/goals), as can be seen
from the variety of humans and the (proto?) intelligences of animals,
so a passive approach is not unreasonable.

In the case of a self-maintaining system, what is it that we wish the
architecture to prevent? About the only thing we can prevent is a
useless program degrading the system from its current level of
operation by taking control of resources. However, we also want to
enable useful programs to take control of more resources. To do this
we must protect the resources and make sure the correct programs can
somehow get the correct resources; the internal programs should do the
rest. So it is a resource management problem. Any active force towards
better levels of operation has to come from the internal programs of
the architecture, and once a higher level of operation has been
reached the architecture should act as a ratchet to prevent it from
slipping down again.

Protecting resources amounts to the security problem, on which we have
a fair amount of literature, and the only passive form of resource
allocation we know of is an economic system.
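
To illustrate the sort of economic, ratchet-like allocation I mean (a
sketch only; the currency and prices are invented): programs hold a
credit balance, resources are granted only when paid for, and credit
earned for useful work is what lets a program keep hold of what it has
gained:

/* Invented illustration of passive, economic resource allocation:
 * a program gets memory only by spending credit, and keeps it only
 * while it can keep paying. */
#include <stdbool.h>
#include <stddef.h>

#define PRICE_PER_PAGE 10     /* invented price: credits per 4 KB page */

struct account {
    long   credit;     /* earned by doing useful work, spent on resources */
    size_t mem_held;   /* resources currently held */
};

/* Grant a page only if the program can pay; the architecture itself
 * never judges whether the program's work is any good. */
bool buy_page(struct account *a)
{
    if (a->credit < PRICE_PER_PAGE)
        return false;              /* can't pay, so can't crowd others out */
    a->credit -= PRICE_PER_PAGE;
    a->mem_held += 4096;
    return true;
}

/* Credit is paid in by whichever internal programs judge usefulness;
 * this is the ratchet: earned credit lets a gain in operation be kept. */
void reward(struct account *a, long amount)
{
    a->credit += amount;
}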

... to be continued

I might go into further detail about what I mean by resource but that
will have to wait for a further post.

  Will Pearson



Re: [agi] Self-maintaining Architecture first for AI

2008-05-09 Thread Richard Loosemore

This is still quite ambiguous on a number of levels, so would it be 
possible for you to give us a road map of where the argument is going? 
At the moment I am not sure what the theme is.


For example, your distinction between active and passive could mean that 
you think we should be building a general learning mechanism, or it 
could mean that you think we should be taking a Generative Programming 
approach to the construction of an AI, or ... probably several other 
meanings.



Richard Loosemore
