2008/7/2 Vladimir Nesov <[EMAIL PROTECTED]>:
> On Wed, Jul 2, 2008 at 2:48 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>>
>> Okay, let us clear things up. There are two things that need to be
>> designed: a computer architecture or virtual machine, and the
>> programs that form the initial set of programs within the system.
>> Let us call the internal programs vmprograms to avoid confusion. The
>> vmprograms should do all the heavy lifting (reasoning, creating new
>> programs); this is where the lawful and consistent pressure would
>> come from.
>>
>> It is the source code of the vmprograms that needs to be changeable.
>>
>> However, the pressure will have to be somewhat experimental to be
>> powerful: you don't know what bugs a new program will have (if you
>> are doing a non-tight proof search through the space of programs).
>> So the point of the VM is to provide a safety net. If an experiment
>> goes awry, the VM should allow each program to limit the bugged
>> vmprogram's ability to affect it, and eventually have it removed and
>> its resources reclaimed.
>>
>> Here is a toy scenario where the system needs this ability. *Note:
>> it is not anything like a full AI, but it illustrates a facet of
>> something a full AI needs, IMO.*
>>
>> Consider a system trying to solve a task, e.g. navigating a maze,
>> where a number of different people are giving helpful hints on how
>> to solve the maze. These hints take the form of patches to the
>> vmprograms, e.g. changing the representation to 6-dimensional, or
>> supplying another patch language that allows better patches. The
>> system would make a copy of the part to be patched and then apply
>> the patch. Now, you could add a patch-evaluation module to see which
>> patch works best, but what happens when the vmprogram implementing
>> that module itself wants to be patched? My solution is to let the
>> patched and non-patched versions compete in the ad hoc economic
>> arena and see which one wins.
>>
>
> What are the criteria that the VM applies to vmprograms? If the VM
> just short-circuits the economic pressure of agents onto one another,
> it doesn't in itself specify the direction of the search. The human
> economy works to efficiently satisfy the goals of human beings, who
> already have their moral complexity. It propagates the decisions that
> customers make, and fuels the allocation of resources based on those
> decisions. The efficiency of the economy lies in how efficiently it
> responds to information about human goals. If your VM just feeds the
> decisions back on themselves, what stops the economy from focusing on
> efficiently doing nothing?
>
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with some purposeful tweaks, trying
to make itself more efficient.

A' has some bugs, such that the human notices something wrong with the
system, so she gives less credit on average when A' is helping out
rather than A.

Now A and A' both have to bid for the chance to help program B, which
is closer to the output (due to the programming of B); B pays back a
proportion of the credit it receives. The credit B gets will be lower
when A' is helping than when A is helping, so A' will earn less in
general than A. There are a few scenarios, ordered from quickest acting
to slowest (a toy numeric sketch of scenario 2 follows the list).

1) B keeps records of who helps it and sees that A' is not helping it
as well as the average, so it no longer lets A' bid. The resources of
A' get used by others when it can't keep up the bidding for them.
2) A' continues bidding a lot, so as to outbid A. However, the amount
A' pays out in bids is, on average, more than it gets back from B, so
A' bankrupts itself and other programs use its resources.
3) A' doesn't manage to outbid A after a fair few trials, and so meets
the same fate as in scenario 1).
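
To make scenario 2 concrete, here is a rough, self-contained Python
sketch of the credit flow. The names, numbers, bidding rule and the 0.5
payback fraction are all invented for illustration, not part of the
design; the only point is the direction of the arithmetic: a buggy A'
that outbids A still bleeds credit, because what comes back from B (via
the less-generous human) is less than it pays out.

# Toy walk-through of scenario 2: A' outbids A for the chance to help B,
# but the human supervisor pays B less when the buggy A' was involved,
# so the share flowing back to A' never covers its bids and it slowly
# bankrupts itself, freeing its resources for other programs.
# All numbers and the 0.5 payback fraction are made up for illustration.

credit = {"A": 50.0, "A'": 50.0, "B": 50.0}
BID = {"A": 3.0, "A'": 4.0}           # A' bids aggressively to win the work
SUPERVISOR = {"A": 10.0, "A'": 6.0}   # credit the human gives B, by helper
PAYBACK = 0.5                         # share of B's earnings returned to its helper

for _ in range(100):
    # Only programs that can still cover their bid take part in the auction.
    bidders = [p for p in ("A", "A'") if credit[p] >= BID[p]]
    if not bidders:
        break
    winner = max(bidders, key=lambda p: BID[p])
    credit[winner] -= BID[winner]     # winner pays B for the chance to help
    credit["B"] += BID[winner]
    earned = SUPERVISOR[winner]       # human credits B for the completed task
    credit["B"] += earned * (1 - PAYBACK)
    credit[winner] += earned * PAYBACK

print(credit)   # A' is drained and locked out of bidding; A and B keep earning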

If you start with a bunch of stupid vmprograms, you won't get
anywhere; the system can just dwindle to nothing. You do have to design
the initial vmprograms fairly well, just in such a way that that design
can change later.
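
For what it's worth, here is a very rough Python sketch of the
safety-net role I have in mind for the VM. Every class, method and
parameter name in it is a hypothetical stand-in invented for this
email, not the design itself; the only point is that the VM does the
bookkeeping and resource reclamation, while the vmprograms supply all
the judgement.

# Minimal sketch of the VM acting only as a safety net / accountant:
# it never judges vmprograms itself, it just enforces credit accounting
# and reclaims the resources of programs that can no longer pay for them.
# The .bid()/.run() interface on the wrapped code is a toy assumption.

class VmProgram:
    def __init__(self, name, code, credit, resources):
        self.name = name
        self.code = code              # the actual vmprogram, treated as opaque
        self.credit = credit          # earned from other programs / the human
        self.resources = resources    # memory/CPU quota it is currently renting

class VM:
    def __init__(self, programs, rent=1.0):
        self.programs = list(programs)
        self.rent = rent              # per-cycle cost of holding on to resources
        self.free_resources = 0       # pool that surviving programs can claim

    def cycle(self, task):
        # Charge rent; programs that cannot pay are removed and their
        # resources go back into the free pool for the others.
        survivors = []
        for p in self.programs:
            p.credit -= self.rent * p.resources
            if p.credit <= 0:
                self.free_resources += p.resources
            else:
                survivors.append(p)
        self.programs = survivors
        if not self.programs:
            return None
        # Auction the task to the highest solvent bidder. (In the full
        # design the bid would go to whichever program offered the task,
        # rather than simply being deducted.)
        winner = max(self.programs, key=lambda p: p.code.bid(task))
        winner.credit -= winner.code.bid(task)
        return winner.code.run(task)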

  Will

