>
> It is a popular fiction because real AI is complex and hard to build.
> So we guess that a quick shortcut would be to specify a simple utility
> function to control a general purpose learner, and have that learner
> use the magic of intelligence to increase its own intelligence.
> It is a bogus argument. Intelligence depends on knowledge and
> computing power. The system described does not start with many bits of
> knowledge. Nor can it make more bits by rewriting its own code.


We are intelligent, and we do not start with many bits of knowledge. We are
general-purpose learners who use the magic of intelligence to increase our
own intelligence, and we do it all the time. Computers, mathematics, the
scientific method -- these are all behavioral/computational programs that
we have implemented as extension modules of our own minds and bodies to
enhance our intelligence. We are approaching the point of being able to
modify our own DNA, bodies, and neuronal structures as well (though some of
these are further off than others).

We do not need to start with the information. We simply need to observe and
learn -- acquiring the information from the environment. Then we
incorporate that information into our own cognitive structures, whether
they are external or internal to our physical bodies. I am capable of
greater levels of cognition with my computer than without -- it raises my
IQ.
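To make that concrete (a toy sketch only -- the `Learner` class and the
patterned "environment" string below are invented for illustration, not
anyone's actual architecture): a learner can start with zero domain
knowledge and still become a reliable predictor of its environment simply
by counting what it observes.

```python
from collections import Counter, defaultdict

class Learner:
    """A learner that starts with no knowledge of its environment
    and acquires it purely through observation."""
    def __init__(self):
        # bigram counts: previous symbol -> Counter of observed successors
        self.counts = defaultdict(Counter)

    def observe(self, prev, nxt):
        self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Predict the most frequently observed successor; "?" if unseen.
        if self.counts[prev]:
            return self.counts[prev].most_common(1)[0][0]
        return "?"

# A patterned environment the learner knows nothing about at first.
environment = "abcabcabc" * 10
learner = Learner()
correct = 0
for i in range(1, len(environment)):
    if learner.predict(environment[i - 1]) == environment[i]:
        correct += 1
    learner.observe(environment[i - 1], environment[i])

# Only the first three predictions miss; after that the acquired
# counts make every prediction on this pattern correct.
print(f"online accuracy: {correct}/{len(environment) - 1}")
```

The information ends up in the learner, but it came entirely from the
environment -- none of it was present at the start.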

I was not born with the information encoding for computers. Neither were
the designers or builders of my computer. My computer was a later addition,
built using information acquired through real world experience. And yet it
still functions as a design improvement to my original blueprints.

Human beings can also use their combined intelligence -- their physical
brains plus their computers -- to design better computers. This is in fact
exactly what major CPU producers do. They replace parts of their cognitive
substrate -- the physically external part we call a computer -- with
something that raises the combined intelligence of the system as a whole:
the human and the computer working together. The new computer then becomes
part of the intelligent system that produces the next iteration of
enhancements.
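The shape of that loop can be cartooned in a few lines. Everything here is
an assumption made for illustration -- in particular the 1.5x gain per
generation is an arbitrary number, not a claim about real CPU design:

```python
def design_next_tool(current_speed):
    """Hypothetical design cycle: the human-plus-tool system produces
    the next tool. The 1.5x gain per generation is an assumed constant,
    chosen only to show the shape of the feedback loop."""
    return current_speed * 1.5

speed = 1.0            # generation 0: the unaided human designer
generations = [speed]
for _ in range(5):
    speed = design_next_tool(speed)  # new tool built using the old one...
    generations.append(speed)        # ...then joins the design team

print(generations)  # each entry was produced by the system containing all prior entries
```

The point is only the structure: each generation's output becomes part of
the system that produces the next, with no information conjured from
nowhere -- the gain comes from the design work itself.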

There is nothing mysterious about self-enhancement. Nor is there anything
special about improving components that reside inside the skull/computer
case/binary executable versus components that reside in our
hands/peripheral devices/external data or DLLs. In every case, the
intelligence of the existing system, plus the real-world knowledge that
system has acquired through experience, is used to produce a new system --
or a new component of the existing system -- that raises intelligence. The
value added is in the compression of acquired information into a new, more
effective design, not in the magical creation of new information from thin
air.
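The compression point can be shown with a deliberately trivial example: a
table of observed input/output pairs (the acquired information) is replaced
by a short rule (the more effective design) that reproduces every
observation and generalizes beyond them. Nothing below is real data --
it is a toy regularity chosen so the compression is obvious:

```python
# Acquired experience: an explicit table of observed input/output pairs.
observations = {n: n * n for n in range(1000)}

# The "compressed design": the same regularity expressed as a short rule.
def rule(n):
    return n * n

# The rule reproduces every observation in far fewer bits...
assert all(rule(n) == out for n, out in observations.items())

# ...and, unlike the table, it generalizes to inputs never observed.
assert rule(5000) == 25_000_000
```

No new information appears from thin air: the rule was distilled from the
observations, and its value lies in being a smaller, more effective
encoding of what was already acquired.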

We can increase the harvest of grain by using automation. Why can't we
increase the harvest of information through automation as well? What makes
us magically special, and our machines not, such that we can't build
something that works according to the same principles as ourselves and is
therefore capable of the same qualities and accomplishments? Why can't our
machines *acquire* information from the *environment* and incorporate that
into self-enhancements, just as we do ourselves? Why does self-enhancement
have to spring from the magical spontaneous generation of information?




On Tue, Mar 4, 2014 at 11:21 AM, Matt Mahoney <[email protected]> wrote:

> On Tue, Mar 4, 2014 at 11:15 AM, Aaron Hosford <[email protected]>
> wrote:
> > Why not just shape the reward function so that attempts at
> self-modification of it reduce the reward signal drastically?
>
> Because self-modifying goal-seeking AI is a fiction. Practical AI is
> neither.
>
> On 01/03/2014 07:40, Tim Tyler wrote:
> > Part of the problem is terminology. However, it is very useful to have a
> general
> > theory of learning based on reward, utility - or whatever you want to
> call the
> > "goodness" metric. I feel frustrated with the critics; they don't seem
> to get it.
>
> We do have a theory. Hutter proved it is not computable.
>
> Animal brains use an efficiently computable approximation of
> reinforcement learning. When you receive a reward of r or penalty -r,
> you increase the frequency of actions performed at time t before the
> signal in proportion to r/t. It works to the extent that past events
> predict future events with probability depending on the time since the
> last occurrence. But it is not the same as rational goal-seeking
> behavior. If it were, then your desire to take heroin would not depend
> on whether you have tried it in the past.
>
> --
> -- Matt Mahoney, [email protected]
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>


