JIM> I am not able to understand why this is truly relevant to solving the
contemporary problem of AGI.  In my opinion, massive interrelations seem
like a necessity and combinatorial complexity the problem.

SERGIO>  In the post that follows, I try to explain where that complexity
comes from, how it relates to AGI, and how it can be dealt with. I also
explain how, the more effort is directed towards "explaining" that
complexity or those interrelations, the more the complexity grows and the
more uncertain the interrelations become. The process of dealing with
complexity by programming computers does not converge.

 

It is a long post, but it still falls short of answering your question in
full. That, I will do in another post, as I anticipate that the present one
will give rise to some discussion. Here is the new post. 

 

PRESENTATION  OF  08-18-2012. 

 

In a political campaign, one candidate accuses the other of something
terrible. Depending on which side you are on, you may react with "At
last! He smashed him!" or "He has no shame! Such dirt!" The entire
country is now living in two very different universes: one where the
accused is finally destroyed, the other where he is not. But within
hours, the offended candidate answers the accusation. He survives. He
has found an invariant behavior, one that does not change under a
transformation from one universe to the other. Don't we always carry a
cat in a box?

 

The last thing I said in this sequence of presentations is: "Since you
are using your brain to do all that reasoning, I must conclude that
there must be something in your brain that allows you to do that. My
attention, therefore, immediately switches away from the reasoning
itself, and towards 'that' which is in your brain and allows you to
reason that way."

 

This statement puts me in a position where I want to find 'that' which
exists in brains and does the reasoning, but I have promised not to
reason myself. For, if I were to reason about 'that', then the same
argument would apply to my own reasoning. So I can't reason. But I can
search, or observe. Of course reasoning can help me search better, but
the essence of searching, as opposed to reasoning, is that you find
something you can't explain at the time. You walk in the woods and,
unexpectedly, run into a treasure. That's a discovery. Later, you
combine the discovery with additional knowledge, such as "the pirates
had been in the area", and write a book explaining how the pirates
visited the area and hid a treasure, and how you found it.

 

Next, I will explain my search, and my findings, in the context of
entropy. At the time of the search and the findings, I had no idea that
entropy had anything to do with it, but for the presentation it is
better to start with entropy. I want to bring up a few well-known
concepts and their not-so-well-known relationships, assuming little
background from the reader. Entropy has vast connotations, but my
presentation is restricted to the context of AGI only.

 

ENTROPY  AND  INFORMATION

 

The first lesson to be learned about information is that information is
a property of a physical system. Information does not exist by itself.
There is always a physical system or medium that carries it. It can be
an optical disk, a computer's memory, a brain's memory. Information can
travel: a file being copied to a computer, a fiber-optic cable that
carries television signals, a beam of light coming from a star and
carrying with it the history of that star, which astronomers can decode.
But even when it travels, there is always something physical that
carries it: a neuron, a bit of memory, an electron travelling in a
cable, a beam of light. I like to think of information as a *modulation*
of a physical system.

 

Information has energy of its own. The energy in information was
directly measured in an experiment conducted only five months ago, in
March 2012. Information is measured in bits, energy is measured in
joules, and the minimum energy of one bit of information, at room
temperature, is about 3 x 10^-21 joules. If information travels, the
energy goes with it. If it is stored on a medium, the energy is in the
medium. If you learn something, or a computer learns something, the
energy is in your brain or in the computer. There is a law of nature
that energy can not be created or destroyed, it can only flow from one
place to another, and information is a flow of energy.
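If the 3 x 10^-21 joule figure refers to the Landauer limit, kT ln 2 at
room temperature (my reading of the figure; the original does not name
it), it can be checked in a few lines:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T,
# E = k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin
T = 300.0           # room temperature, kelvin

energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.2e} J per bit")  # roughly 2.87e-21 J
```

which indeed comes out close to 3 x 10^-21 joules.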

 

Entropy is frequently introduced as a measure of uncertainty in
information. But entropy is also a physical quantity, a property of the
state of a physical system. Any physical system has entropy, and given
the state of that system, the entropy can be calculated as a unique
function of that state. Entropy, too, can travel, meaning it can flow
from one place to another. There is also a law of nature for entropy,
but unlike the law for energy, which bans any change in the total, the
law for entropy bans only decrease. The entropy of an isolated system
can not decrease. Entropy can flow from one system to another, and it
can not disappear, but it is perfectly possible for the entropy of a
system to increase on its own, even if there is no flow of entropy
entering that system.
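The "uncertainty" reading of entropy can be made concrete with Shannon's
formula, H = -sum(p log2 p) over the states; a minimal sketch, not the
author's own formalism:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A system certain to be in one state carries no uncertainty.
print(shannon_entropy([1.0]))        # 0.0 bits
# Four equally likely states: entropy is log2(4) = 2 bits.
print(shannon_entropy([0.25] * 4))   # 2.0 bits
```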

 

The entropy of a system can increase as the result of a process of
acquiring information, for example, learning. The information carries
energy, the influx of energy causes an increase in the entropy of the
system, and the system's uncertainty increases accordingly. Consider now
a description of the system in terms of variables and states (see my
earlier post). When energy flows in, the result is an increase in the
number of states and in the number of possible transitions from one
state to another. The increase in the number of transitions carries with
it an increase in the uncertainty about which transitions will actually
take place (there are more possibilities), and hence an increase in
entropy. This is a thermodynamic process, and it can not be avoided.
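Assuming, for illustration only, that every possible transition is
equally likely, the uncertainty grows as log2 of the number of
transitions, so adding states and transitions necessarily adds
uncertainty:

```python
import math

# Uncertainty (in bits) about which transition happens next, when each
# of N possible transitions is equally likely: H = log2(N).
for n_transitions in (2, 4, 8, 16):
    print(n_transitions, "transitions ->", math.log2(n_transitions), "bits")
```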

 

Consider Douglas Hofstadter looking at his mother. His brain has just
received signals coming from 100,000,000 dots of light on his retinae. With
the information, also came additional energy, additional entropy, and
additional uncertainty. Hofstadter's brain now has more information, more
energy, more entropy, and more uncertainty. Hofstadter is now more ignorant
and more confused than he was just before he saw his mother. 

 

Think of a developer programming a computer, perhaps a carefully
designed AGI machine. Of course, the computer with the program is a
physical system. As the developer writes more and more code into it, and
makes the computer "better informed", she is also making it more
confused (so much for the niceties of our language), and it becomes more
and more difficult for her to "understand" the program. This process is
very well known in software development; I am not making it up. It
constitutes a hard limit that can't be overcome by writing more code.
There is a partial remedy known as refactoring: a process where a human
developer extracts entropy from the computer, making it less uncertain,
less confused, and more understandable. The problem: a human is required
to do that. A machine can't do it.
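Refactoring in the everyday software sense can be sketched as follows.
The entropy framing is the author's; the mechanics shown here, collapsing
duplicated logic into a single definition so there are fewer independent
places that can vary, are the standard practice (the price functions and
the VAT rate are made up for the example):

```python
# Before: the same pricing rule is written out three times, so there are
# three independent places where the behavior could silently drift apart.
def price_book(base):
    return base + base * 0.25

def price_disc(base):
    return base + base * 0.25

def price_toy(base):
    return base + base * 0.25

# After refactoring: one definition, one place to read, test, and change.
VAT_RATE = 0.25  # illustrative rate only

def price_with_vat(base):
    return base * (1 + VAT_RATE)

# Behavior is unchanged; only the structure improved.
assert price_with_vat(100) == price_book(100) == 125.0
```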

 

If an AGI machine is supposed to have, one day, a human level of
intelligence, then it should be able to do by itself what the human
developer does, namely, extract entropy from its own program. Currently,
I don't know of any efforts but my own directed at systematically
extracting entropy from computer programs by machine. Leaving the
entropy in the program results in a confused, uncertain machine,
incapable of finding an invariant behavior the way the owner of the cat
in the box did. Certainly not an AGI machine.

 

Sergio

 

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Friday, August 17, 2012 12:27 PM
To: AGI
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
Schroedinger's cat.

 

Sergio: And no,  there is no limit on time of execution either. This is
still unpublished, so I can only give you a hint. Assuming a neural-network
computer simulation where each element of the causal set is represented by
exactly one individual neuron, and assuming near-neighbor coupling, the time
of execution is constant and independent of size. This is massive
parallelism. I am just curious, are you still with me? May I ask a quiz to
verify? You don't have to answer, the answer is below. Aside from the
obvious fact that this is going to be fast, what is the real, profound
significance of this result? 

 

Jim: I am not able to understand why this is truly relevant to solving the
contemporary problem of AGI.  In my opinion, massive interrelations seem
like a necessity and combinatorial complexity the problem.

Jim Bromer

 

 

 


AGI | Archives: https://www.listbox.com/member/archive/303/=now |
Modify Your Subscription:
https://www.listbox.com/member/archive/rss/303/10561250-164650b2
