2008/6/30 Vladimir Nesov <[EMAIL PROTECTED]>:
> On Mon, Jun 30, 2008 at 10:34 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>>
>> I'm seeking to do something half way between what you suggest (from
>> bacterial systems to human alife) and AI. I'd be curious to know
>> whether you think it would suffer from the same problems.
>>
>> First, are we agreed that the von Neumann model of computing has no
>> hidden bias in its problem-solving capabilities? It might be able to
>> do some jobs more efficiently than others, and need lots of memory to
>> do others, but it is not particularly suited to learning chess or to
>> running down a gazelle, which means it can be reprogrammed to do either.
>>
>> However, it has no guide to what it should be doing, so it can become
>> virus-infested or subverted. It has a purpose, but we can't explicitly
>> define it. So let us try to put in the most minimal guide we can: we
>> don't give it a specific goal, just a tendency to favour certain
>> activities or programs.
>
> It is the wrong level of organization: computing hardware is the physics
> of computation; it isn't meant to implement specific algorithms, so I
> don't quite see what you are arguing.
>

I'm not implementing a specific algorithm; I am controlling how
resources are allocated. In the current architecture the hardware does
whatever the kernel says, from memory allocation to IRQ allocation.
Instead of this, my architecture would allow any program to bid credit
for a resource. The one that bids the most wins and spends its credit.
Certain resources, like output memory space (i.e. if the program is
controlling the display or an arm or something), also allow the program
to specify a bank, which gives the program income.

A bank is a special variable that can't normally be edited by programs,
but can be spent. The bank of an outputting program will be given credit
depending upon how well the system as a whole is performing: if it is
doing well, the amount of credit it gets will be above average; if
poorly, below average. After a certain time the resources will need to
be bid for again, so credit is continually coming into the system and
continually being sunk.
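
To make the bookkeeping concrete, here is a rough Python sketch of the
auction and the bank deposits. The class, the bid policy and the income
figure are all made up purely for illustration; this is not an
implementation, just the shape of the idea:

# Hypothetical sketch of the credit auction and bank mechanism.
class Program:
    def __init__(self, name, bank=0):
        self.name = name
        # The bank can only change through bids (spent) or through
        # reinforcement deposits made by the system itself.
        self._bank = bank

    @property
    def bank(self):
        return self._bank

    def bid_for(self, resource):
        # Illustrative policy: offer half of the current bank balance.
        return self._bank // 2

    def spend(self, amount):
        self._bank -= amount

    def deposit(self, amount):
        # Called only by the system, never edited directly by programs.
        self._bank += amount


def run_auction(resource, programs):
    """Highest bidder wins a lease on the resource and has its bid sunk."""
    bids = {p: p.bid_for(resource) for p in programs}
    winner = max(bids, key=bids.get)
    winner.spend(bids[winner])   # credit leaves the system here
    return winner


def reinforce(output_program, performance, base_income=100):
    """Credit enters the system through the bank of the output-controlling
    program, scaled by how well the system as a whole is doing."""
    output_program.deposit(int(base_income * performance))


programs = [Program("vision", bank=40), Program("planner", bank=60)]
holder = run_auction("video_camera_interrupt", programs)
reinforce(holder, performance=1.2)   # above-average run, above-average income

The point of the sketch is that the only way credit enters is through
deposits tied to overall performance, and the only way it leaves is
through winning bids.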

The system will be seeded with programs that can perform rudimentarily
well. E.g. you will have programs that know how to deal with visual
input, and they will bid for the video camera interrupt. They will then
sell their services for credit (so that they can bid for the interrupt
again) to a program that correlates visual and auditory responses, which
in turn sells its services to a high-level planning module, and so on
down to the program controlling the arm, which is what actually gets the
credit.
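
Roughly, the earned credit would flow back along that chain something
like the sketch below. The 50% pass-through is an arbitrary figure
picked just to illustrate; in the real system the split would be set by
whatever prices the programs negotiate with each other:

def settle_chain(banks, chain, income, pass_through=0.5):
    """banks maps program name -> bank balance.  chain is ordered from
    sensor to effector, e.g. ["vision", "correlator", "planner", "arm"];
    only the last element receives the reinforcement income directly."""
    banks[chain[-1]] += income
    payment = income
    # Walk back from the effector toward the sensors: each stage keeps
    # part of what it earned and pays its supplier the rest.
    for consumer, supplier in zip(reversed(chain), reversed(chain[:-1])):
        payment = int(payment * pass_through)
        banks[consumer] -= payment
        banks[supplier] += payment


banks = {"vision": 0, "correlator": 0, "planner": 0, "arm": 0}
settle_chain(banks, ["vision", "correlator", "planner", "arm"], income=100)
# arm keeps 50, planner keeps 25, correlator keeps 13, vision gets 12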

All these modules are subject to change and re-evaluation. They merely
suggest one possible way the system could be used. It is supposed to be
ultimately flexible. You could seed it with a self-replicating neural
simulator that tried to hook its inputs and outputs up to other
neurons. Neurons would die out if they couldn't find anything to do.

>> How to do this? Form an economy based on
>> reinforcement signals: those that get more reinforcement signals can
>> outbid the others for control of system resources.
>
> Where do reinforcement signals come from? What does this specification
> improve over natural evolution that needed billions of years to get
> here (that is, why do you expect any results in the foreseeable future)?

Most of the internals are programmed by humans, and they can be
arbitrarily complex. The feedback comes from a human, or from a utility
function, although utility functions are harder to define. The
architecture simply doesn't restrict the degrees of freedom that the
programs inside it can explore.
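
To illustrate what I mean by the feedback being pluggable, something
like the following, where all the names and numbers are just
illustrative:

from typing import Callable

FeedbackSource = Callable[[], float]

def human_feedback() -> float:
    """A human rates recent behaviour: 0 = poor, 1 = average, 2 = good."""
    return float(input("Rate the system (0-2): "))

def utility_feedback() -> float:
    """A programmed stand-in for the human judge; harder to define well."""
    measurement = 0.7   # placeholder for whatever the system observes
    return max(0.0, 2.0 - abs(measurement - 1.0))

def credit_income(source: FeedbackSource, base_income: int = 100) -> int:
    """Turn whichever feedback source is plugged in into a credit deposit."""
    return int(base_income * source())

The architecture only ever sees the scalar signal, so the human judge
and the utility function are interchangeable.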

>
>> This is obviously reminiscent of Tierra and a million and one other
>> alife systems. The difference is that I want the whole system to
>> exhibit intelligence. Any form of variation is allowed, from random
>> variation to getting in programs from the outside. It should be able
>> to change the whole system, from the OS level up, based on that
>> variation.
>
> What is your meaning of `intelligence'? I now see it as merely the
> efficiency of the optimization process that drives the environment towards
> higher utility, according to whatever criterion (reinforcement, in
> your case). In this view, how does "I'll do the same, but with
> intelligence" differ from "I'll do the same, but better"?
>
Terran's artificial chemistry as a whole could not be said to have a
goal. Or, to put it another way, applying the intentional stance to it
probably wouldn't help you predict what it did next. Applying the
intentional stance to my system should help you predict what it does.

This means he needs to use a lot more resources to get a single useful
system. Also, the system might not do what he wants, but I don't think
he minds about that.

I'm allowing humans to design everything, just allowing the very low
level to vary. Is this clearer?

  Will Pearson

