Aaron, 

 

Your comment turned out to be very profound because it touches on the very
foundations of AGI. I had to write an entire article to respond. Later, I
will perhaps add some examples and more comments, and post it on my website.
The argument looks complicated, but it is actually pretty basic, and you
should be able to follow it at least in broad terms even if you are not a
physicist. I will respond to your other posts on cycles and causality, and
to the one on evolution, later (remind me if I forget). Here is the article:

 

Recalling what we learned from the Carnot cycle, any flow of energy into (or
out of) a physical system causes not only its energy but also its entropy to
increase (decrease). The system transitions from a state of lower (higher)
energy and entropy to another of higher (lower) energy and entropy. Since
energy and entropy are functions of state, the system cannot, by itself,
return to its original state. The transformation is irreversible. If left
alone, the energy will remain the same. If the system is inert (Aaron's
term), then its entropy will also remain the same. But if internal flows of
energy exist in the system, as would be the case for a closed system with a
dynamics, then the entropy will increase by itself and will only stabilize
when the flows of energy stop.
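
To make the bookkeeping concrete, here is a small numerical illustration; the
numbers are arbitrary and are not part of the argument. For a reversible
transfer of heat Q into a system held at temperature T, the entropy gained is
Q/T; an irreversible transfer gains more (the Clausius inequality).

# Illustration only: entropy gained by a system receiving heat Q at temperature T.
# dS = Q/T for a reversible transfer; real (irreversible) transfers gain more.
def entropy_gain_reversible(heat_joules, temperature_kelvin):
    return heat_joules / temperature_kelvin

print(entropy_gain_reversible(100.0, 300.0))  # about 0.33 J/K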

 

Information, too, has energy. This is Landauer's principle, now fifty years
old. The energy of information was confirmed by direct measurement for the
first time only recently, in March 2012 (Bérut et al., Nature). Information
has entropy as well. The entropy of information is a measure of the degree of
uncertainty in the information, that is, the degree to which the information
departs from being perfectly causal and deterministic. The lessons from the
Carnot cycle apply to information, and any flow of information will cause the
corresponding increase in entropy, and therefore in uncertainty.
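
To put a number on Landauer's bound: erasing one bit must dissipate at least
k_B T ln 2 of heat. A quick check (illustration only):

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

# Landauer's bound: minimum heat dissipated when one bit of information is erased.
def landauer_limit_joules(temperature_kelvin):
    return K_B * temperature_kelvin * math.log(2)

print(landauer_limit_joules(300.0))  # ~2.87e-21 J per erased bit at room temperature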

 

Which brings me to AGI. An AGI machine is a physical system: it has memory,
it interacts with its environment, and it has energy and entropy. When the
AGI interacts with its environment -- for example, when a sensor or a sensory
organ detects light or sound -- the AGI stores information from the
interaction into its memory. The acquisition of information causes the state
of the AGI to transition from one of low energy and entropy to another of
higher energy and higher entropy. This new state is more uncertain than the
original state. The more the AGI learns, the more uncertain it becomes.
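
If an example helps, the uncertainty of what is stored can be measured by the
Shannon entropy of the stored symbols. The little script below is only an
illustration with made-up data, not a model of the AGI itself:

import math
from collections import Counter

# Shannon entropy (in bits) of a sequence of stored symbols: a measure of
# how uncertain, or unpredictable, the stored information is.
def shannon_entropy(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("aaaabbbb"))  # 1.0 bit: two symbols, equally likely
print(shannon_entropy("aaaabbcd"))  # 1.75 bits: more varied data, more uncertainty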

 

The same happens to a computer that is being programmed by a developer. A
computer with a program is a physical system with a dynamics determined by
the program. But the more code the developer installs on the computer, the
more uncertain the computer-program system becomes. It is not possible to
make the system more certain, or to cause it to self-organize, by way of
programming, irrespective of how smart or imaginative the programming is, or
even if the computer-program system is claimed to be an AGI system. This
statement constitutes proof that self-organization, and the causal logic I
am proposing, are uncomputable.

 

But the uncomputability of self-organization does not make AGI impossible -
after all, our brains work. Does it make AGI more difficult? No, not even
that. It actually makes AGI much easier. Consider an AGI system that is
learning, meaning that it is acquiring more information, more energy, more
entropy, and more uncertainty. The problem is the entropy/uncertainty: we
would like to remove the excess entropy. Removing entropy from the AGI can
only be achieved by interaction with another system, say a computer, provided
the interaction involves a flow of energy and entropy from the AGI to the
computer. This last statement carries an important conclusion for AGI: an AGI
system must necessarily be a host-guest system, consisting of two parts, the
host and the guest. The useful information is the guest; the host acts on the
guest to remove its uncertainty. I have proposed this idea before [ref]. In
fact, nearly all attempted AGI systems are host-guest: they learn, and then
they process the information in some way. But the devil is in the details,
and this idea brings us to the next, much bigger puzzle.
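
Here is a toy sketch of the host-guest split, only to fix ideas; the names
are made up and this is not the actual mechanism. The guest accumulates raw
observations, and the host acts on the guest to squeeze out redundancy, a
crude stand-in for removing entropy:

# Toy sketch only; not the actual mechanism. The guest holds the learned
# information; the host reorganizes the guest to remove redundancy.
class Guest:
    def __init__(self):
        self.memory = []

    def learn(self, observation):
        self.memory.append(observation)

class Host:
    def reorganize(self, guest):
        # Keep the first occurrence of each observation, drop repetitions:
        # the useful content survives, the redundancy does not.
        seen = set()
        compacted = []
        for item in guest.memory:
            if item not in seen:
                seen.add(item)
                compacted.append(item)
        guest.memory = compacted

guest = Guest()
for obs in ["light", "sound", "light", "light", "sound"]:
    guest.learn(obs)
Host().reorganize(guest)
print(guest.memory)  # ['light', 'sound']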

 

The rate at which an outgoing flow of energy reduces the entropy of the AGI
is variable. At one end of the spectrum, it would be possible to remove
energy by erasing all the information gained by learning. Doing that would
certainly reduce the entropy and uncertainty of the AGI, but it would also
eliminate the useful information. At the other end, it would be desirable to
remove entropy preferentially and leave the useful information behind,
without caring about the energy. This would require a new type of Maxwell's
demon, one that can tell information from entropy, or certainty from
uncertainty, and still not violate the Second Law of Thermodynamics. A
transformation like that would preserve the causal, deterministic behavior of
the system, which is the certainty, while eliminating as much uncertainty as
possible.

 

A transformation like that is known as a behavior-preserving transformation.
It is commonplace in Software Engineering, where it is known as refactoring.
Every developer practices refactoring nearly all of the time. By definition,
refactoring is a transformation of the code - that is, of the useful
information, the information that determines the dynamics of the computer -
such that its behavior is preserved. This remark suggests that the Maxwell's
demon we are seeking does exist. The trouble is, it appears to only exist in
the brain. 
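
For concreteness, here is a minimal example of a behavior-preserving code
transformation; any such rewrite would do:

# Refactoring in miniature: the code (the useful information) changes,
# but the behavior is preserved.
def total_before(prices):
    total = 0
    for p in prices:
        total = total + p
    return total

def total_after(prices):
    # Behavior-preserving rewrite of total_before.
    return sum(prices)

assert total_before([1, 2, 3]) == total_after([1, 2, 3])  # same behavior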

 

My contribution to AGI is the discovery, by experimental observation and
artificial replication, of the Maxwell's demon in question [paper in
Complexity]. I will continue this subject in another article.

 

Sergio

 

From: Aaron Hosford [mailto:[email protected]] 
Sent: Friday, August 24, 2012 3:48 PM
To: AGI
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the
hopelessness of Friendly AI ...

 

I don't see how you make the leap from complete lack of uncertainty to a
guarantee of survival. Why wouldn't it be a guarantee of self-destruction
instead? That seems much easier to predict, and therefore much less
uncertain.

 

Evolution tacked on intelligence as an afterthought, to assist an existing,
working system that was not intelligent, but functional. This is why the
overwhelming majority of organisms are not intelligent. The most successful
organisms on earth are those with the least intelligence. Single celled
organisms, insects, and other organisms that follow the "make a lot of
copies and hope for the best" strategy. In contrast, human beings instead
prefer a few sure bets.

 

So yes, I think you're on to something in terms of what defines
intelligence, but intelligence by itself is an observational phenomenon, not
a behavior. We watch something and learn how it works. That means we're
smart. But until that is coupled with external, goal-oriented behavior,
intelligence is just watching and learning, not doing. Which means even if
the system learns what an optimal behavior strategy for survival is, it
doesn't mean it'll choose it. That is left to choice or will, a different
kind of learning that's behavior-based and takes the outputs of
intelligence as its inputs.

On Fri, Aug 24, 2012 at 3:19 PM, Sergio Pissanetzky <[email protected]>
wrote:

Aaron,

 

It doesn't decide what to do with the regularities it finds. The
regularities are in fact invariant behaviors. The behaviors are obtained by
removing all entropy, that is, all uncertainties, from the information it
currently possesses. They are said to be invariant because they are the same
no matter which of the uncertainties (within the given information) actually
happens. In other words, they are actions that guarantee survival, and they
directly activate the actuators. The only drive is survival, but not even
that is intended. It just happens, because brained individuals survive
better than non-brained ones. 

This is a new notion; you will not find it in any book. You can read a
short article <http://www.scicontrols.com/SchroedingerCat.htm> I wrote to
explain these things better.

 

Sergio

 

From: Aaron Hosford [mailto:[email protected]] 
Sent: Friday, August 24, 2012 11:00 AM 


To: AGI
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the
hopelessness of Friendly AI ...

 

Humans have a built-in animal drive system (emotion & the pleasure/pain
dichotomy), which works in tandem with the goal-less observation system that
constitutes our intelligence. Without drive to give direction and precedence
to choices of behavior, I don't imagine the intelligence we exhibit would
actually do anything. We would be difficult to control in the way a large
boulder is difficult to control -- we would be inert. How does the AGI
machine you propose decide what to do with the regularities it finds in the
incoming sensory data? Or is it also inert?


 

On Fri, Aug 24, 2012 at 9:40 AM, Sergio Pissanetzky <[email protected]>
wrote:

Matt,

Understood. I suggest an entropy approach, based on the observation that
entropy reduction causes self-organization and the formation of patterns. To
my knowledge, this has never been tried before, except by me. I have reason
to believe that our brains work that way.

The AGI machine I propose consists of an entropy processor with memory,
input and output, that's all. No computer, no program, except that almost
certainly the entropy processor will be a computer programmed for that task.
Completely problem-independent and data-agnostic. Everything else goes in as
data. It works, within my limitations, and I am trying to build a larger one
with an FPGA.

One major difference from current AGI attempts is that my AGI cannot be
controlled. Your only interaction with it is to give it information. You can
see considerable similarities with humans.


Sergio

-----Original Message-----
From: Matt Mahoney [mailto:[email protected]]

Sent: Friday, August 24, 2012 9:17 AM
To: AGI
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the
hopelessness of Friendly AI ...

On Fri, Aug 24, 2012 at 9:52 AM, Sergio Pissanetzky <[email protected]>
wrote:
> No it's not. Because Watson and its program have been developed by
> humans. I meant Google, as a machine, without any humans writing a
> program and telling it how to learn to play chess.

So I guess what you want is a machine where you can describe the rules of
chess or any other game using English words, and it will learn to play the
game. That's a language modeling problem. It's one of the hard problems of
AI that we haven't solved yet, along with vision, hearing, robotics, music,
art, humor, and some others. I have no reason to believe that these problems
won't be solved eventually. It will probably require a lot of computing
power and a lot of human effort in programming and training. What do you
suggest?


-- Matt Mahoney, [email protected]

