On Mon, 10 Feb 2003, Ben Goertzel wrote:

> > > A goal in Novamente is a kind of predicate, which is just a
> > function that
> > > assigns a value in [0,1] to each input situation it observes...
> > i.e. it's a
> > > 'valuation' ;-)
> >
> > Interesting. Are these values used for reinforcing behaviors
> > in a learning system? Or are they used in a continuous-valued
> > reasoning system?
>
> They are used for those two purposes, AND others...

Good. In that case the discussion about whether ethics
should be built into Novamente "from the start" fails
to recognize that they already are. Building ethics into
reinforcement values is building them in from the start.

Solomonoff Induction (http://www.idsia.ch/~marcus/kolmo.htm)
provides a good theoretical basis for intelligence, and
in that context behavior is determined by only two things:

1. The behavior of the external world.
2. Reinforcement values.

Real systems include lots of other stuff, but only to
create a computationally efficient approximation to the
behavior of Solomonoff Induction (which is basically
uncomputable). You can try to build ethics into this
"other stuff", but then you aren't "building them in
from the start".
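
To make the two-ingredient picture concrete, here is a minimal sketch (all
names are hypothetical, not Novamente's actual API) of an agent whose behavior
is fixed entirely by (1) the environment's responses and (2) a reinforcement
valuation in [0,1] -- a crude, computable stand-in for the uncomputable
Solomonoff-optimal policy:

```python
import random

def valuation(situation):
    # Hypothetical "goal predicate": maps an observed situation to [0, 1].
    # This is where ethics would be built in "from the start" -- in the
    # values that drive reinforcement, not in the machinery around them.
    return 1.0 if situation == "goal" else 0.0

def run_agent(environment, actions, steps=50, epsilon=0.1):
    """Epsilon-greedy reinforcement: track the average valuation each
    action has earned, mostly exploit the best estimate, occasionally
    explore. Returns the final value estimate per action."""
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(steps):
        if random.random() < epsilon or min(counts.values()) == 0:
            action = random.choice(actions)  # explore (or try untried actions)
        else:
            action = max(actions, key=lambda a: totals[a] / counts[a])
        situation = environment(action)   # ingredient 1: the external world
        reward = valuation(situation)     # ingredient 2: reinforcement values
        totals[action] += reward
        counts[action] += 1
    return {a: totals[a] / max(counts[a], 1) for a in actions}

# Toy deterministic environment: only action "b" reaches the goal.
def environment(action):
    return "goal" if action == "b" else "nothing"
```

Everything besides the environment and the valuation (the epsilon-greedy
bookkeeping, the averages) is exactly the "other stuff": machinery whose only
job is to approximate what the two ingredients already determine.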

Cheers,
Bill
