I don't see how you make the leap from complete lack of uncertainty to a
guarantee of survival. Why wouldn't it be a guarantee of self-destruction
instead? That seems much easier to predict, and therefore much less
uncertain.
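
To make that concrete with a toy example (all numbers made up): if a system
scores candidate actions purely by how predictable their outcomes are, the
most certain action can just as easily be a fatal one.

    import math

    # Hypothetical outcome distributions: action -> {outcome: probability}.
    actions = {
        "leap_off_cliff": {"dead": 1.0},        # perfectly predictable
        "forage": {"fed": 0.6, "hungry": 0.4},  # uncertain, but survivable
    }

    def entropy(dist):
        """Shannon entropy (bits) of an outcome distribution."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Choosing the minimum-entropy (least uncertain) action:
    best = min(actions, key=lambda a: entropy(actions[a]))
    print(best)  # -> leap_off_cliff: zero uncertainty, zero survival

Removing uncertainty tells you which outcome you will get; it says nothing
about whether that outcome is one you want.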

Evolution tacked on intelligence as an afterthought, to assist an existing,
working system that was not intelligent but was functional. This is why the
overwhelming majority of organisms are not intelligent. The most successful
organisms on earth are those with the least intelligence: single-celled
organisms, insects, and others that follow the "make a lot of copies and
hope for the best" strategy. Human beings, in contrast, prefer a few sure
bets.

So yes, I think you're on to something in terms of what defines
intelligence, but intelligence by itself is an observational phenomenon,
not a behavior. We watch something and learn how it works; that means we're
smart. But until that is coupled with external, goal-oriented behavior,
intelligence is just watching and learning, not doing. So even if the
system learns what the optimal behavior strategy for survival is, that
doesn't mean it will choose it. That is left to choice or will: a
different, behavior-based kind of learning that takes the outputs of
intelligence as its inputs.
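
To make the split concrete, here is a minimal sketch (all names are
hypothetical, not anyone's actual design): an Observer that only builds a
predictive model, and a separate Drive that takes the Observer's outputs as
its inputs and turns them into action.

    import random
    from collections import Counter, defaultdict

    class Observer:
        """Intelligence as pure observation: learns what follows what."""
        def __init__(self):
            self.counts = defaultdict(Counter)

        def watch(self, action, outcome):
            self.counts[action][outcome] += 1

        def predict(self, action):
            seen = self.counts[action]
            return seen.most_common(1)[0][0] if seen else None

    class Drive:
        """Choice/will: scores the Observer's predictions by desirability."""
        def __init__(self, preferences):
            self.preferences = preferences  # outcome -> desirability

        def choose(self, observer, candidates):
            return max(candidates,
                       key=lambda a: self.preferences.get(observer.predict(a), 0))

    obs = Observer()
    for _ in range(100):  # watching and learning, not doing
        obs.watch("eat", "fed" if random.random() < 0.9 else "sick")
        obs.watch("fast", "hungry")

    # Without a Drive, obs just sits there. With one, behavior emerges:
    drive = Drive({"fed": 1, "hungry": -1, "sick": -2})
    print(drive.choose(obs, ["eat", "fast"]))  # -> eat

The Observer alone is inert in exactly the sense above: it can tell you the
optimal strategy without ever being moved to act on it.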

On Fri, Aug 24, 2012 at 3:19 PM, Sergio Pissanetzky
<[email protected]> wrote:

> Aaron,
>
> It doesn't decide what to do with the regularities it finds. The
> regularities are in fact invariant behaviors. The behaviors are obtained by
> removing all entropy, that is, all uncertainties, from the information it
> currently possesses. They are said to be invariant because they are the same
> no matter which of the uncertainties (within the given information)
> actually happens. In other words, they are actions that guarantee survival,
> and they directly activate the actuators. The only drive is survival, but
> not even that is intended. It just happens, because brained individuals
> survive better than non-brained ones.
>
> This is a new notion; you will not find it in any book. You can read a
> short article <http://www.scicontrols.com/SchroedingerCat.htm> I wrote to
> explain these things better.
>
> Sergio
>
> *From:* Aaron Hosford [mailto:[email protected]]
> *Sent:* Friday, August 24, 2012 11:00 AM
> *To:* AGI
> *Subject:* Re: [agi] Hugo de Garis on the Singhilarity Institute and the
> hopelessness of Friendly AI ...
>
> Humans have a built-in animal drive system (emotion & the pleasure/pain
> dichotomy), which works in tandem with the goal-less observation system that
> constitutes our intelligence. Without drive to give direction and
> precedence to choices of behavior, I don't imagine the intelligence we
> exhibit would actually do anything. We would be difficult to control in the
> way a large boulder is difficult to control -- we would be inert. How does
> the AGI machine you propose decide what to do with the regularities it
> finds in the incoming sensory data? Or is it also inert?
>
> On Fri, Aug 24, 2012 at 9:40 AM, Sergio Pissanetzky
> <[email protected]> wrote:
>
> Matt,
>
> Understood. I suggest an entropy approach, based on the observation that
> entropy reduction causes self-organization and the formation of patterns.
> To my knowledge, this has never been tried before, except by me. I have
> reason to believe that our brains work that way.
>
> The AGI machine I propose consists of an entropy processor with memory,
> input, and output; that's all. No computer, no program, except that almost
> certainly the entropy processor will be a computer programmed for that
> task. It is completely problem-independent and data-agnostic. Everything
> else goes in as data. It works, within my limitations, and I am trying to
> build a larger one with an FPGA.
>
> One major difference from current AGI attempts is that my AGI cannot be
> controlled. Your only interaction with it is to give it information. You
> can see considerable similarities with humans.
>
> Sergio
>
> -----Original Message-----
> From: Matt Mahoney [mailto:[email protected]]
> Sent: Friday, August 24, 2012 9:17 AM
> To: AGI
> Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the
> hopelessness of Friendly AI ...
>
> On Fri, Aug 24, 2012 at 9:52 AM, Sergio Pissanetzky
> <[email protected]> wrote:
> > No it's not. Because Watson and its program have been developed by
> > humans. I meant Google, as a machine, without any humans writing a
> > program and telling it how to learn to play chess.
>
> So I guess what you want is a machine where you can describe the rules of
> chess or any other game using English words, and it will learn to play the
> game. That's a language modeling problem. It's one of the hard problems of
> AI that we haven't solved yet, along with vision, hearing, robotics, music,
> art, humor, and some others. I have no reason to believe that these
> problems won't be solved eventually. It will probably require a lot of
> computing power and a lot of human effort in programming and training.
> What do you suggest?
>
> -- Matt Mahoney, [email protected]


