Re: [FRIAM] REPOST: The meaning of inner.

2008-07-20 Thread Jochen Fromm
 If you were to go about programming a computer 
 to think about itself, how would you do it? 

Even if we programmed a computer to think about
itself, the computer would be extremely bored,
because it is only as intelligent as a cash
register or a washing machine. It just follows
commands, only extremely fast.

You can program a computer to behave like a
complex adaptive system which acts, reacts and
learns. Such a system or agent is able to act
flexibly, adapting itself to the environment and
choosing the right action. It has a kind of
free will, because it can choose the action
it likes. Here it makes more sense to develop
software that thinks about itself, but if the
system can only recognize a few categories, a
sense of itself is no more than a faint emotion.
To reach human intelligence, you need a vast
number of computers, because the brain is
obviously a huge distributed system. Then
the interesting question is: can the system
be aware of itself?

It sounds paradoxical, but if we want to enable
a system of computers to think about itself,
we must prevent any detailed self-knowledge.
If we could perceive how our minds work at
the microscopic level of neurons, we would
notice that there is no central organizer or
controller. The illusion of the self would
probably break down if a brain were
conscious of the distributed nature of its
own processing. In this sense, self-
consciousness is only possible because the
true nature of the self is hidden from
consciousness.

The complex adaptive system in question is
aware of what it is doing only indirectly,
through and with the help of the external world.
To be more precise, the system can only watch
its own activity on a given level: on the
macroscopic level it can recognize macroscopic
things, and on the microscopic level it can
recognize other microscopic things - a neuron
can recognize and react to other neurons - but
there is no level-crossing awareness of its
own activity.

So you have to build a giant system which
consists of a huge number of computers, and
only if it doesn't have the slightest
idea how it works can it develop a form
of self-consciousness. And only if you take
a vast number of items - neurons, computers
or servers - is the system complex enough to
create the impression that a single item is
in charge.

Quite paradoxical, isn't it? But there is
something else we need: the idea of the self
must have a base, a single item to identify
oneself with.

Thus we need two worlds: a mental world
where the thinking - the complex information
processing - takes place, and where the system
is a large distributed network of nodes, and a
physical world where a single self walks around
and where the system appears to be a single,
individual item: a person. This physical world
could also be any virtual world which is complex
enough to support AI. Each of these worlds could
be realized by a number of advanced data centers.

There are a number of conditions for both worlds:
the hidden, mental world must be grounded in the
visible, physical world, it must be complex enough
to mirror it, and it must be fast enough to react
instantly. Grounded means we need a 1:infinite
connection between both worlds. The collective
action of the hidden system must result in a
single action of an item in the visible system.
And a single event in the visible system must
in turn trigger a collective activity of the
hidden system during perception. Every perception
and action of the system must pass through a
single point in the visible, physical world. If
both worlds are complex enough, then this is the
point where true self-consciousness can emerge.
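
A toy sketch of this fan-out/fan-in (all names
here are hypothetical, just to make the idea
concrete):

    # Toy sketch of the 1:infinite connection.
    # Many hidden nodes process a percept collectively;
    # their collective activity collapses into a single
    # action of one visible body.
    import random

    NUM_NODES = 10_000  # stand-in for a vast number of nodes

    def hidden_node(node_id, percept):
        # Each node sees the same percept but votes
        # from its own local "view" of it.
        random.seed(hash((node_id, percept)))
        return random.choice(["approach", "avoid", "ignore"])

    def perceive_and_act(percept):
        # Fan-out: one event in the visible world triggers
        # collective activity in the hidden world.
        votes = [hidden_node(i, percept) for i in range(NUM_NODES)]
        # Fan-in: the collective result becomes a single action.
        return max(set(votes), key=votes.count)

    print(perceive_and_act("loud noise"))  # one body, one action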

To summarize, in order to build a computer 
system which is able to think about itself,
we need to separate the thinking from the
self:

(a) the prevention of detailed self-knowledge,
which enables self-awareness

(b) a 1:infinite connection between two
very complex worlds which remain in
correspondence with each other

When we think, certain patterns are brought 
into existence. Since a brain contains more 
than 100 billion neurons, each pattern is a 
vast collection of nearly invisible little 
things or processes. When we think of ourselves, 
a pattern is brought into existence, too. It 
is the identification of a vast collection 
of nearly invisible little items with a 
single thing: ourselves.

Apart from the abstract idea, there is no
immaterial self hovering over a hundred billion
flickering neurons. The idea of a self or soul
as the originator of one's own thoughts is an
illusion - but you may ask: if the self is
unreal, then who is reading this? So maybe it
is more precise to say that the self is a
confusing insight or an insightful confusion.
The essence of self-consciousness seems to be
this strange combination of insight and
confusion.

Self-consciousness is the strange, short-lived
feeling associated with intricate patterns of
feedback loops which arise when inconsistent
items are identified with each other.

Re: [FRIAM] REPOST: The meaning of inner.

2008-07-20 Thread Marcus G. Daniels
Jochen Fromm wrote:
 Since a brain contains more 
 than 100 billion neurons, each pattern is a 
 vast collection of nearly invisible little 
 things or processes. 
For comparison, LANL Roadrunner has about 5 trillion transistors for the 
CPUs (~13000 PowerXCell 8i processors and ~6500 dual core Opterons) and 
another 800 trillion for RAM (~100 TB).   
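
A quick back-of-the-envelope check of the RAM
figure, assuming the standard one-transistor-per-bit
DRAM cell:

    # ~100 TB of DRAM at 1 transistor per bit
    ram_bytes = 100e12        # ~100 TB
    ram_bits = ram_bytes * 8  # 8 bits per byte
    print(f"{ram_bits:.0e}")  # ~8e+14, i.e. ~800 trillion transistors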





Re: [FRIAM] REPOST: The meaning of inner.

2008-07-20 Thread Ken Lloyd
With a little reorganization and forethought, you can even have your own
mini-supercomputer using banks of GPU cards to crunch vectors and matrices.
See Nvidia's CUDA development system, and their Tesla computer system.
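
For a taste of what driving such a card looks like from the host
side, here is a minimal sketch, assuming a NumPy-like CUDA wrapper
such as the CuPy library (the matrix sizes are arbitrary):

    # Matrix crunching on the GPU via CuPy (assumed installed).
    import cupy as cp

    a = cp.random.rand(4096, 4096).astype(cp.float32)
    b = cp.random.rand(4096, 4096).astype(cp.float32)
    c = a @ b                        # multiply runs on the GPU
    cp.cuda.Device(0).synchronize()  # wait for the kernel to finish
    print(float(c.sum()))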

- Ken 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Jochen Fromm
 Sent: Sunday, July 20, 2008 12:52 PM
 To: The Friday Morning Applied Complexity Coffee Group
 Subject: Re: [FRIAM] REPOST: The meaning of inner.
 
 Yes, an impressive supercomputer. I think it is much more 
 difficult to use a supercomputer with a trillion operations 
 per second than a huge cluster of ordinary computers, such 
 as those in Google's data centers.
 
 -J.
 
 - Original Message -
 From: Marcus G. Daniels [EMAIL PROTECTED]
 To: The Friday Morning Applied Complexity Coffee Group 
 friam@redfish.com
 Sent: Sunday, July 20, 2008 7:49 PM
 Subject: Re: [FRIAM] REPOST: The meaning of inner.
 
 
  For comparison, LANL Roadrunner has about 5 trillion 
 transistors for the
  CPUs (~13000 PowerXCell 8i processors and ~6500 dual core 
 Opterons) and
  another 800 trillion for RAM (~100 TB).
 
 
 
 





Re: [FRIAM] REPOST: The meaning of inner.

2008-07-20 Thread Marcus G. Daniels
Jochen Fromm wrote:
 I think it is much more difficult
 to use a supercomputer with a trillion operations per second
 than a huge cluster of ordinary computers, such as those
 in Google's data centers.
One code for investigating synthetic cognition is called PetaVision.  
This code was adapted to Roadrunner and, like LINPACK, exceeded 1000 
trillion floating point operations a second in recent benchmarks.  
Another project is the Blue Brain project at EPFL.  

Codes like this usually use MPI (message passing) and are often 
latency limited (i.e., transaction speed is ultimately bounded by 
the speed of light). For such applications, computers connected 
with ordinary networking just won't scale. To say it is more 
difficult to build systems and software to cope with that is 
really just to say these are hard problems.
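
A minimal example of that message-passing style, assuming the
mpi4py bindings (launch with something like
mpiexec -n 4 python script.py):

    # Each rank holds local state; one allreduce synchronizes all
    # ranks, and its round-trip time is what latency-limits scaling.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local_activity = float(rank + 1)  # this rank's local state
    t0 = MPI.Wtime()
    total = comm.allreduce(local_activity, op=MPI.SUM)
    dt = MPI.Wtime() - t0

    if rank == 0:
        print(f"global activity {total}, allreduce took {dt:.6f}s")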

The main limitations of silicon systems are heat and distance. 
Although there are multiple layers of circuitry on modern 
microprocessors (~10), there is nothing like the 3D integration 
that exists in the brain.

Marcus




Re: [FRIAM] REPOST: The meaning of inner.

2008-07-20 Thread Jochen Fromm
In my opinion, biologically detailed large-scale
models of the brain offer little value if the
system is not embedded in a physical world.
Examining vision is a first step in the right
direction. The brain is an adaptive system which
becomes useless if it is cut off from its
environment. The brain-scale simulation of the
neocortex on the IBM Blue Gene/L supercomputer
(linked below), for instance, has brought little
new insight.

Maybe it is useful for understanding thalamocortical
oscillations, or how neural assemblies interact,
but if you want to go beyond traditional AI, it
may not be advisable to build a biologically
detailed large-scale model. A human brain has not
only more than 100 billion neurons but also
roughly 100 trillion synapses. It is impossible
to model this at the finest level of detail.
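
A rough storage estimate shows why, using the
numbers above and an assumed bare 4-byte weight
per synapse:

    # One float32 weight per synapse, ignoring all dynamics.
    synapses = 100e12          # ~100 trillion synapses
    bytes_per_synapse = 4
    total_tb = synapses * bytes_per_synapse / 1e12
    print(f"{total_tb:.0f} TB")  # ~400 TB, vs. ~100 TB RAM on Roadrunner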

I believe it is possible to achieve human-like
cognitive performance and self-consciousness
with computers, though, in the way I tried to
describe in the first post: if the processing is
parallel enough, if the model is not too biological,
and if the system is embedded in some kind of
physical world (whether real or virtual). Maybe
even a Roadrunner which controls an agent in the
successor of SecondLife. Who knows...

Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer
http://www.research.ibm.com/journal/rd/521/djurfeldt.html

-J.
- Original Message - 
From: Marcus G. Daniels [EMAIL PROTECTED]
To: The Friday Morning Applied Complexity Coffee Group friam@redfish.com
Sent: Sunday, July 20, 2008 10:48 PM
Subject: Re: [FRIAM] REPOST: The meaning of inner.



 One code for investigating synthetic cognition is called PetaVision.
 This code was adapted to Roadrunner and, like LINPACK, exceeded 1000
 trillion floating point operations a second in recent benchmarks.
 Another project is the Blue Brain project at EPFL.




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org