After getting off completely on the wrong foot the last time I posted
something, and not having had time to read the papers I should have, I
have decided to try to start afresh and outline where I am coming
from. I'll get around to writing a proper paper later.

There are two possible modes for designing a computer system, which I
shall characterise as the active and the passive. The active approach
attempts to solve the problem directly, whereas the passive approach
provides a framework under which the problem and related ones can be
more easily solved. The passive approach is generally less efficient
but much more reconfigurable.

The passive approach is used when there is a large number of related
possible problems with a large variety of solutions. Examples of the
passive approach are mainly architectures, programming languages and
operating systems, with a variety of different goals. They are not
always completely passive; automatic garbage collection, for example,
impacts the system somewhat. One illuminating example is the variety
of security systems that have been built along these lines.
Security in this sense means that the computer system is composed of
domains, not all of which are equally trusted or allowed
resources. Now, it is possible to set up a passive system designed with
security in mind insecurely, for example by allowing all domains to
access every file on the hard disk. Passive systems do not guarantee
the solution they are aiming to aid; the most they can do is allow as
many things as possible to be represented and permit the prevention of
certain things. A passive security system allows the prevention of one
domain lowering the security of part of another domain.
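
To make the domain idea concrete, here is a minimal sketch (my own
illustration, not anything from the security literature) of such a
passive layer in Python. The names (PermissionTable, AccessDenied and
so on) are invented for the example. The point is that the framework
only enforces whatever permission table it is handed; whether the
configuration is secure or insecure is up to whoever fills it in.

    # Minimal sketch of a passive security layer: the framework only enforces
    # the permission table it is given; it does not by itself guarantee security.
    # All names here are illustrative.

    class AccessDenied(Exception):
        pass

    class PermissionTable:
        def __init__(self):
            # (domain, resource) -> set of allowed operations, e.g. {"read", "write"}
            self._grants = {}

        def grant(self, domain, resource, ops):
            self._grants.setdefault((domain, resource), set()).update(ops)

        def check(self, domain, resource, op):
            if op not in self._grants.get((domain, resource), set()):
                raise AccessDenied(f"{domain} may not {op} {resource}")

    # Secure configuration: each domain may only touch its own files.
    table = PermissionTable()
    table.grant("editor", "/home/editor/notes.txt", {"read", "write"})
    table.grant("browser", "/home/browser/cache", {"read", "write"})

    # The insecure configuration of the same framework would simply grant
    # every domain write access to every file, letting one domain lower the
    # security of another.

    table.check("editor", "/home/editor/notes.txt", "write")       # allowed
    try:
        table.check("browser", "/home/editor/notes.txt", "write")  # prevented
    except AccessDenied as e:
        print(e)

The framework itself is passive: it never decides which grants are
sensible, it only makes it possible to withhold them.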

The set of problems that I intend to help solve is the set of
self-maintaining computer systems. Self-maintenance is basically
reconfiguring the computer to be suited to the environment it finds
itself in. The reasons why I think it needs to be solved before AI is
attempted are 1) humans self-maintain, and 2) otherwise the very
complex computer systems we build for AI will have to be maintained by
us, which may become increasingly difficult as they approach human
level.

It is worth noting that I am using AI in the pure sense of being able
to solve problems. It is entirely possible to get very high-complexity
problem solvers (including ones potentially passing the Turing test)
that cannot self-maintain.

There is a large variety of possible AIs (different
bodies/environments/computational resources/goals), as can be seen from
the variety of humans and the (proto?) intelligences of animals, so a
passive approach is not unreasonable.

In the case of a self-maintaining system, what is it that we wish the
architecture to prevent? About the only thing we can prevent is a
useless program degrading the system from its current level of
operation by taking control of resources. However, we also want to
enable useful programs to control more resources. To do this we must
protect the resources and make sure the correct programs can somehow
get the correct resources; the internal programs should do the rest.
So it is a resource management problem. Any active force towards
better levels of operation has to come from the internal programs of
the architecture, and once a higher level of operation has been
reached the architecture should act as a ratchet to prevent it from
slipping down again.

Protecting resources amounts to the security problem, on which we have
a fair amount of literature, and the only passive form of resource
allocation we know of is an economic system.
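
As a toy illustration of what I mean (my own sketch, not a worked-out
mechanism), an economic allocator can be as simple as an auction over a
fixed pool of resource units, where a program's bid is limited by
credit it has earned by being useful. The framework never judges which
program is useful; it only runs the auction, which gives the
ratchet-like effect described above: a program with no credit cannot
take resources away from one that has earned them.

    # Toy sketch of a passive, economic resource allocator.  All names
    # and numbers are illustrative assumptions, not a proposal.

    def allocate(pool_units, bids):
        """bids: {program: (credit, units_wanted)} -> {program: units_granted}"""
        granted = {}
        remaining = pool_units
        # Programs with higher credit are served first; ties broken by name
        # so the allocation is deterministic.
        for prog, (credit, wanted) in sorted(
            bids.items(), key=lambda kv: (-kv[1][0], kv[0])
        ):
            take = min(wanted, remaining)
            granted[prog] = take
            remaining -= take
        return granted

    bids = {
        "useful_planner": (90, 6),   # has earned credit from past usefulness
        "useless_spinner": (1, 10),  # wants a lot, has earned almost nothing
    }
    print(allocate(10, bids))  # {'useful_planner': 6, 'useless_spinner': 4}

How credit is earned is exactly the part the internal programs, not the
architecture, would have to supply.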

... to be continued

I might go into further detail about what I mean by a resource, but
that will have to wait for a further post.

  Will Pearson
