On Wed, Jul 2, 2008 at 2:48 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>
> Okay, let us clear things up. There are two things that need to be
> designed: a computer architecture or virtual machine, and the programs
> that form the initial set of programs within the system. Let us call
> the internal programs vmprograms to avoid confusion. The vmprograms
> should do all the heavy lifting (reasoning, creating new programs);
> this is where the lawful and consistent pressure would come from.
>
> It is the source code of the vmprograms that needs to be changeable.
>
> However, the pressure will have to be somewhat experimental to be
> powerful; you don't know what bugs a new program will have (if you are
> doing a non-tight proof search through the space of programs). So the
> point of the VM is to provide a safety net. If an experiment goes
> awry, the VM should allow each program to limit the bugged
> vmprogram's ability to affect it, and eventually have the bugged
> vmprogram removed and its resources reclaimed.
>
> Here is a toy scenario where the system needs this ability. *Note it
> is not anything that is like a full AI but illustrates a facet of
> something a full AI needs IMO*.
>
> Consider a system trying to solve a task, e.g. navigating a maze,
> where a number of different people are giving helpful hints on how to
> solve the maze. These hints take the form of patches to the
> vmprograms, e.g. changing the representation to a 6-dimensional one,
> or supplying another patch language that enables better patches. So
> the system would make a copy of the part to be patched and then patch
> the copy. Now you could add a patch-evaluation module to see which
> patch works best, but what happens when the module that implements
> patch evaluation itself wants to be patched? My solution to the
> problem is to let the patched and non-patched versions compete in the
> adhoc economic arena, and see which one wins.
>
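To make the "adhoc economic arena" concrete, here is a minimal toy sketch of how such a competition might work. All names and numbers (`VmProgram`, `RENT`, `PAYOUT`, the skill values) are my own illustrative assumptions, not your actual design: each vmprogram holds a credit balance, pays rent each round, earns a payout proportional to its relative task score, and is removed by the VM when it goes bankrupt.

```python
import random

RENT = 1.0    # cost of existing for one round
PAYOUT = 2.0  # reward pool per round, split by relative score

class VmProgram:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill      # stand-in for "how well it solves the maze"
        self.credit = 10.0      # starting resource grant from the VM

    def score(self):
        # noisy task performance, so the better program only wins on average
        return max(0.0, random.gauss(self.skill, 0.2))

def run_arena(programs, rounds=200, seed=0):
    random.seed(seed)
    programs = list(programs)
    for _ in range(rounds):
        scores = [(p, p.score()) for p in programs]
        total = sum(s for _, s in scores) or 1.0
        for p, s in scores:
            p.credit += PAYOUT * (s / total) - RENT
        # bankrupt vmprograms are removed; the VM reclaims their resources
        programs = [p for p in programs if p.credit > 0]
        if len(programs) <= 1:
            break
    return programs

original = VmProgram("navigator", skill=0.5)
patched = VmProgram("navigator+hint", skill=0.7)   # hinted patch applied to a copy
survivors = run_arena([original, patched])
print([p.name for p in survivors])
```

Under these assumed parameters the patched copy earns more than rent on average and the original earns less, so the original eventually goes bankrupt and is removed — without the VM itself ever judging which patch is "better".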

What are the criteria that the VM applies to vmprograms? If the VM
just short-circuits the economic pressure of agents onto one another,
it in itself doesn't specify the direction of the search. The human
economy works to efficiently satisfy the goals of human beings, who
already have their moral complexity. It propagates the decisions that
customers make and fuels the allocation of resources based on those
decisions. The efficiency of an economy lies in how efficiently it
responds to information about human goals. If your VM just feeds the
decisions back on themselves, what stops the economy from settling on
efficiently doing nothing?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com
