RE: [agi] How wrong are these numbers?

2002-12-03 Thread Ben Goertzel

Alan,

The next question is: What's your corresponding estimate of processing
power?

To emulate the massively parallel information update rate of the brain on
N bits of memory, how many commodity PC processors are required per GB of
RAM?
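
One hedged way to frame that ratio, as a back-of-envelope sketch in Python
(every figure here is my assumption, not something from the thread: a 30 Hz
whole-memory update rate, and a commodity CPU sustaining roughly 10^9
byte-operations per second):

    # Hypothetical sizing sketch: how many PC processors are needed per GB
    # of RAM to touch every byte of that GB at a brain-like update rate.
    bytes_per_gb = 10**9             # decimal GB (assumption)
    update_rate_hz = 30              # brain-like global update rate (assumption)
    cpu_bytes_per_sec = 10**9        # sustained throughput of one CPU (assumption)

    ops_needed_per_sec = bytes_per_gb * update_rate_hz
    processors_per_gb = ops_needed_per_sec / cpu_bytes_per_sec
    print(f"~{processors_per_gb:.0f} processors per GB")  # ~30 under these assumptions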

Ben G


 Ben Goertzel wrote:
  A short, interesting article on the information capacity of the brain
  was written by nanotechnologist Ralph Merkle; see

  http://www.merkle.com/humanMemory.html

  He gives figures between 10^9 bits and 10^15 bits.

 [The upper bound, which in my opinion is unrealistic, that I gave in
 my original post was 1.6*10^10 bits (8 bits * 2 GB).]

 Yes, I am aware of those results. His information is based on counting
 synapses. -- I just re-read it... It seems my numbers aren't off base at
 all. What it means is that we are in the 'singularity window' _NOW_.

 To continue the line of thought:

 Let's say that the information content of the output of a cortical column
 can be expressed in firings per EEG period. The waking EEG frequency is
 around 30 Hz. A neuron can fire as fast as 1 kHz. Doing the division
 yields an output capacity of 33 1/3 bits/EEG cycle. (It is probable that
 there are secondary I/O channels for each cortical column.)
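
 In Python, the arithmetic above looks like this (a minimal sketch; the
 2 GB, 30 Hz, and 1 kHz figures are from this thread, and equating one
 firing opportunity with one bit is the modeling assumption stated above):

     # Back-of-envelope figures from the thread.
     BITS_PER_GB = 8 * 10**9                       # 8 bits per byte, decimal GB

     # Upper bound from the original post: 2 GB expressed in bits.
     upper_bound_bits = 2 * BITS_PER_GB            # 1.6e10 bits

     # Cortical column output: max firing rate over the waking EEG rate.
     max_firing_rate_hz = 1000.0                   # neuron can fire up to ~1 kHz
     eeg_rate_hz = 30.0                            # waking EEG rhythm
     bits_per_eeg_cycle = max_firing_rate_hz / eeg_rate_hz   # 33 1/3

     print(f"memory upper bound: {upper_bound_bits:.1e} bits")
     print(f"column output: {bits_per_eeg_cycle:.1f} bits/EEG cycle")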

 --
 pain (n): see Linux.
 http://users.rcn.com/alangrimes/






RE: [agi] How wrong are these numbers?

2002-12-03 Thread Ben Goertzel


Kevin,

About mind=brain ...

My own view of that elusive entity, “mind,” is well-articulated in terms of
the philosophy of Charles S. Peirce, who considered there to be several
different levels on which mind could be separately considered.  Peirce used
three levels, but inspired by Jung and others, I have introduced a fourth,
so I prefer to think about:

1.  First, raw experience
2.  Second, physical reaction
3.  Third, relationship and pattern
4.  Fourth, synergy and emergence

Each of these levels constitutes a different perspective on the mind; and
many important mental phenomena can only be understood by considering them
on several different levels.

First corresponds, roughly speaking, to consciousness.  On this level,
analysis has no more meaning than the color red, and everything is simply
what it presents itself as.  We will not speak about this level further in
this article, except to say that, in the Peircean perspective, it is an
aspect that everything has – even rocks and elementary particles – not just
human brains.

Second, the level of physical reaction, corresponds to the “machinery”
underlying intelligent systems.  In the case of humans, it’s bodies and
brains; in the case of groups of humans, it’s sets of bodies and brains.  In
fact, there’s a strong case to be made that even in the case of “individual”
human minds, the bodies and brains of a whole set of humans are involved.  No
human mind makes sense in isolation; if a human mind is isolated for very
long, it changes into a different sort of thing than an ordinary human mind
as embedded in society.

Third, the level of relationship and pattern, is the level that is most
commonly associated with the word “mind” in the English language.  One way
of conceiving of the mind is as the set of patterns associated with a
certain physical system.  By “associated with” we mean the patterns in that
system, and the patterns that emerge when one considers that system together
with other systems in its habitual environment.  So, for instance, the human
mind may be considered as the set of patterns in the human brain (both in
its structure, and in its unfolding over time), and the patterns that are
observed when this brain is considered in conjunction with other humans and
its physical environment.  This perspective may justly be claimed
incomplete – it doesn’t capture the experiential aspect of the mind, which
is First; or the physical aspect of the mind, which is Second.  But it
captures a very important aspect of mind, mind as relationship.  This view
of mind in terms of “patterns” may be mathematically formalized, as has been
done in a loose way in my book From Complexity to Creativity.
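
This notion of pattern can be loosely illustrated in code (a minimal sketch
of my own construction, not the book's formalism: zlib compressibility
stands in here for "a representation as something simpler"):

    import os
    import zlib

    def has_pattern(entity: bytes) -> bool:
        # An entity exhibits pattern if some representation of it is
        # simpler (here: shorter) than the entity itself; zlib is a crude
        # stand-in for "a simpler representation".
        return len(zlib.compress(entity, level=9)) < len(entity)

    structured = b"abc" * 1000      # highly patterned
    random_like = os.urandom(3000)  # essentially incompressible

    print(has_pattern(structured))   # True: a much simpler representation exists
    print(has_pattern(random_like))  # almost surely False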

Fourth, the level of synergy, has to do with groups of patterns that emerge
from each other, in what have been called “networks of emergence.”   A mind
is not just a disconnected bundle of patterns, it’s a complex,
self-organizing system, composed of patterns that emerge from sets of other
patterns, in an interpenetrating way.

The notion of synergy is particularly important in the context of collective
intelligence.  The “mind” of a group of people has many aspects –
experiential, physical, relational and synergetic – but what distinguishes
it from the minds of the people within the group, is specifically the
emergent patterns that exist only when the group is together, and not when
the group is separated and dispersed throughout the rest of society.

One thing all this means is that the number of bits needed to realize a mind
physically does not equal the number of bits in the mind.  One cannot
reduce mind to the Second level.  The physical substructure of a mind is the
key unlocking the door to a cornucopia of emergent patterns between an
embodied system and its environment (including other embodied systems).
These patterns are the mind, and they contain a lot more information than is
explicit in the number of bits in the physical substrate.
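
A toy illustration of that last point (my construction, not part of the
original argument): a cellular automaton whose substrate is a fixed 64-bit
row, yet whose unfolding over time exhibits far more distinct local patterns
than the substrate itself holds.

    # Rule 110, a simple 1-D cellular automaton.
    def rule110_step(cells):
        n = len(cells)
        alive = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)}
        return [1 if (cells[(i - 1) % n], cells[i], cells[(i + 1) % n]) in alive
                else 0
                for i in range(n)]

    width = 64                 # the physical substrate: 64 bits
    cells = [0] * width
    cells[width // 2] = 1      # single seed cell

    seen = set()               # distinct 8-bit local configurations observed
    for _ in range(256):       # unfold the system over time
        for i in range(width - 7):
            seen.add(tuple(cells[i:i + 8]))
        cells = rule110_step(cells)

    print(f"substrate: {width} bits")
    print(f"distinct 8-bit patterns seen over time: {len(seen)}")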

Regarding quantum or quantum gravity approaches to the mind, these are
interesting to me, but from a philosophical perspective they're just
details regarding how the physical universe organizes its patterns... they
don't really affect the above general picture.

-- Ben G





 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
 Behalf Of maitri
 Sent: Tuesday, December 03, 2002 6:10 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] How wrong are these numbers?


 I've got a sawbuck in my pocket that says that you are seriously
 underestimating the capacity of the human mind.

 In fact, it's questionable whether you can emulate a mouse brain adequately
 with that amount of power.  I also think you guys are seriously
 underestimating the memory capacity of the human mind.  Of course, I view
 the fundamental problem with your analysis as the mistaken assumption that
 mind=brain.  There's a lot of anecdotal evidence that indicates the error
 in this line of thinking...

RE: [agi] How wrong are these numbers?

2002-12-03 Thread Ben Goertzel


Kevin,

You raise a critical point, and my thinking on this point is a bit
unorthodox, as well as incomplete...

There is a big unanswered question close to the heart of my theory of mind,
and this is the connection between Firstness and Fourthness.  I sum up this
question with the Orwell paraphrase "All things are conscious, but some
things are more conscious than others."

I'm a Peircean animist, in the sense that I believe consciousness is
everywhere.  Yet, I believe that I'm more intensely conscious than a dog,
and a dog is more intensely conscious than a flea, which is more intensely
conscious than a virus, which is more intensely conscious than a
molecule

One question is: Why is this?  But I'm not even sure of the standpoint from
which this question "Why?" is asked.

Another question is: What are the specifics of this law connecting
Firstness with degree-of-integrated-complexity (an aspect of Fourthness)?
This is something that interests me greatly

Along these lines, I believe that if one constructs an AGI with a high
degree of general intelligence, ensuing from a high degree of synergetic
integrated complexity [the only way to get significant general intelligence,
I think], this AGI system *will have* a highly intense degree of
consciousness, analogous to (though with a different subjective quality
from) that of humans.

But I don't have a justification for this belief of mine, because I don't
have a solution to the so-called "hard problem" of consciousness.  All I
have is an analysis of the "hard problem" of consciousness, which suggests
that it may be possible to create artificial consciousness by creating AGI
and watching the intense consciousness come along for free.

I suspect that in fact the "hard problem" will remain in some sense
unsolved.  That is, the qualitative nature of the connection between
intensity of consciousness and degree of general intelligence [for lack of a
better phrase] may remain mysterious.  Yet, by experimenting with
artificial minds, we may learn to quantify this relationship.

Quantify it how?  Various sorts of artificial minds may report their
subjective experiences -- in all sorts of subtle realms of awareness, as
well as their own variety of everyday ordinary consciousness -- and we and
they may learn rules relating their subjective experiences with known
aspects of their physical implementations.

Yet, even when such laws are known -- laws relating aspects of the conscious
experience of a mind to aspects of its brain -- this will still not
resolve the core mystery of how consciousness-mind relates to pattern-mind.
But this mystery is ultimately not a question for science, perhaps... and
we don't need to articulate its solution in order to build and study minds
that are genuinely conscious... we can *feel* its solution sometimes, but
that's another story...

-- Ben G





 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
 Behalf Of maitri
 Sent: Tuesday, December 03, 2002 8:04 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] How wrong are these numbers?


 Ben,

 I think I followed most of your analysis :)

 I agree with most of what you stated so well.  The only difficulty for me
 is that the patterns, whether emergent in the individual or the group,
 still pertain to the gross level of mind and not the subtle levels of
 consciousness.  It is quite OK, IMO, to disregard this subtle aspect of
 mind in your design for AGI, Strong AI or the Singularity.  But it should
 be noted that this is disregarding what I would consider the predominant
 capabilities of the human mind.

 For instance, in relation to memory capacity: let's say I could live for
 the age of the universe, roughly 15 billion years.  I believe the human
 mind (without enhancement of any kind) is capable of remembering every
 detail of every day for that entire lifespan.  A person can only
 understand this if they understand the non-gray-matter portion of the
 Mind.  The mind you describe I would call mind, with a small m.  The Mind
 I am referring to is capital M.  I believe it is an error to reduce
 memory and thought to the calculations that Kurzweil and Alan put forth.

 Clearly we have had incredibly fast processors, yet we can't even create
 something that can effectively navigate a room, or talk to me, or reason,
 or completely simulate an ant.  How can they reconcile that??  If they say
 "we don't know how to program that yet," then I say: well then, stop
 saying that the singularity is near strictly because of processor
 speed/memory projections.  Processor speed is irrelevant when you have no
 idea how to use them!

 It is true that few humans reach this capacity I describe above.  I would
 call them human singularities.  There have only been a handful in history.
 But it's important to note that these capabilities are within each of us.
 I will go as far to say that any computer system we develop, even one that
 realizes all the promises of the singularity, can only match the capacity
 of the human Mind.  Why?  Because the universe is the Mind itself, and the
 computational capacity of the universe is rather immense and cannot be
 exceeded by something created within its own domain.

Re: [agi] How wrong are these numbers?

2002-12-03 Thread Alan Grimes
 For instance, in relation to memory capacity: let's say I could live
 for the age of the universe, roughly 15 billion years.  I believe the
 human mind (without enhancement of any kind) is capable of remembering
 every detail of every day for that entire lifespan.

That is contrary to actual experience. Many of the elderly complain of
difficulty in forming new memories.

  I believe it is an error to reduce memory and thought to the
 calculations that Kurzweil and Alan put forth.

Egad! I'm being compared to Kurzweil the Weenie... =\

The entire point of the entire AGI enterprise is to reduce memory and
thought to calculations. 

 Clearly we have had incredibly fast processors, yet we can't even 
 create something that can effectively navigate a room, or talk to me, 
 or reason or completely simulate an ant. 

All of those are software problems.

 How can they reconcile that??  If they say "we don't know how to
 program that yet," then I say: well then, stop saying that the
 singularity is near strictly because of processor speed/memory
 projections.  Processor speed is irrelevant when you have no idea how
 to use them!

Okay, I have some theories... Unfortunately I'm only a theorist so I'll
need some code-slaves to make any progress but I think that's doable. 

The research machine that I tried to build a few months ago (and is
still sitting in pieces) will only be a high-end PC. It should be enough
to make excellent progress even though it only uses 1.2 GHz processors...

 It is true that few humans reach this capacity I describe above.  I
 would call them human singularities.  There have only been a handful
 in history.

Then we'll worry about dealing with the mean intelligence first. ;)

 But it's important to note that these capabilities are within each of 
 us.

As you said, only savants. I am surely not one of them. 

  I will go as far to say that any computer system we develop, even one 
 that realizes all the promises of the singularity, can only match the 
 capacity of the human Mind.  Why?  Because the universe is the Mind 
 itself, and the computational capacity of the universe is rather 
 immense and cannot be exceeded by something created within its own 
 domain.

This is almost theistic... You should check your endorphin levels.


-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri
Ben,

As always, thanks for the well-thought-out reply... I am glad you could make
some sense of my ramblings...

Just a couple of thoughts...

In relation to the subtle consciousness, or store consciousness, I believe
it interpenetrates all things equally.  So to speak of more or less
conscious, from this vantage point, is incorrect.  A wooden doll is
interpenetrated as well, but is not conscious of it because it lacks the
causes and conditions for thought to arise.  This does not negate the
presence of the store consciousness within it.

As for the connection between the First and the Fourth: I think of the
store consciousness as a sea of potentialities.  When the appropriate causes
and conditions are in place, something will become manifest.  Your writing a
response to me springs from the store consciousness being stimulated by the
higher-level consciousness.  Anger, lust, love, compassion, etc. all are
potentialities within the SC.  It is useful to think of them as seeds:
whatever seed is watered, that is what will grow.  This is also how members
of a species in disparate locations can seem to operate as a unit.  A bird in
France figures out how to open milk jugs after the milkman delivers them,
and soon after the birds in Kansas are doing it as well...

In the case of humans, it can be said that even the simple task of buying a
tie is not made without the influence of the collective..

I should state that I do not hold the store consciousness to be the absolute
substance underlying all things.  In fact, it cannot be so, because the
store consciousness, although extremely subtle, is itself conditioned and
arises only dependently.  As such, it is not self-existent, it is
impermanent, and it cannot be the Ultimate suchness of the Universe.

And this reaches the limits of my knowledge on the subject...

Thanks again for your thoughtful dialog.  Here in PA there's no one to talk
to about such things.  I'm really a marginal character in society for sure :)

Kevin

- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 03, 2002 8:26 PM
Subject: RE: [agi] How wrong are these numbers?





Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri



  For instance, in relation to memory capacity: let's say I could live
  for the age of the universe, roughly 15 billion years.  I believe the
  human mind (without enhancement of any kind) is capable of remembering
  every detail of every day for that entire lifespan.

 That is contrary to actual experience. Many of the elderly complain of
 difficulty in forming new memories.

That is because of a defect in the brain, not the Mind.



   I believe it is an error to reduce memory and thought to the
  calculations that Kurzweil and Alan put forth.

 Egad! I'm being compared to Kurzweil the Weenie... =\


Sorry, but your analysis smacked of his...

 The entire point of the entire AGI enterprise is to reduce memory and
 thought to calculations.

That's fine, I wish you luck, but I still have that sawbuck in my pocket...


  Clearly we have had incredibly fast processors, yet we can't even
  create something that can effectively navigate a room, or talk to me,
  or reason or completely simulate an ant.

 All of those are software problems.

That's the argument we've heard for some time.  I think Ben is closer than
anyone to having a true mapping of the brain and its capabilities.  As to
whether it ultimately develops the emergent qualities we speak of... time
will tell... even if it falls short of singularity-type hype, I believe it
can provide tremendous benefits to humanity, and that's what I care about.


  How can they reconcile that??  If they say "we don't know how to
  program that yet," then I say: well then, stop saying that the
  singularity is near strictly because of processor speed/memory
  projections.  Processor speed is irrelevant when you have no idea how
  to use them!

 Okay, I have some theories... Unfortunately I'm only a theorist so I'll
 need some code-slaves to make any progress but I think that's doable.

 The research machine that I tried to build a few months ago (and is
 still sitting in pieces) will only be a high-end PC. It should be enough
 to make excellent progress even though it only uses 1.2 GHz processors...

I'm new to AI, but I am reading Norvig's book, and one of the first things
he says is that what's important is not what you can theorize, it's what you
can actually **DO**.  If you can't encode it... it's just an unproven theory.


  It is true that few humans reach this capacity I describe above.  I
  would call them human singularities.  There have only been a handful
  in history.

 Then we'll worry about dealing with the mean intelligence first. ;)

I would suggest this is negligent on your part, but that's your choice..


  But it's important to note that these capabilities are within each of
  us.

 As you said, only savants. I am surely not one of them.

I never said savants.  The only reason you and I haven't become a
singularity is because we are steeped in delusion and somewhat lazy.


   I will go as far to say that any computer system we develop, even one
  that realizes all the promises of the singularity, can only match the
  capacity of the human Mind.  Why?  Because the universe is the Mind
  itself, and the computational capacity of the universe is rather
  immense and cannot be exceeded by something created within its own
  domain.

 This is almost theistic... You should check your endorphin levels.


If you read any discussion of the Singularity, it's hard to separate what is
being said from theism.  These machines are given God-like qualities and
powers.  It smacks almost of a new religion with its dogma of 1's and 0's,
but reeks of the same old idea that I am flawed and weak and small and mortal
and I want to be super and supremely smart and immortal!  I can't blame
people for looking for such things, as the human condition is a rather sad
one...

Good luck Alan!

Kevin




 --
 pain (n): see Linux.
 http://users.rcn.com/alangrimes/







RE: [agi] How wrong are these numbers?

2002-12-03 Thread Ben Goertzel

Kevin wrote:
  I will go as far to say that any computer system we develop, even one
 that realizes all the promises of the singularity, can only match the
 capacity of the human Mind.  Why?  Because the universe is the Mind
 itself, and the computational capacity of the universe is rather
 immense and cannot be exceeded by something created within its own
 domain.

Well... I empathize with your experiential intuition, but this doesn't quite
feel right to me.

Why doesn't your argument lead also to the conclusion that no computer
system can exceed the capacity of the dog Mind?

Why is the human Mind special?

If you're going to say that the human and dog minds have the same
capacity, then I'm going to respond that your definition of "capacity" is
interesting, but misses some aspects of the commonsense notion of the
capacity of a mind...

I turn again to the Peircean levels.  For Mind as First, there is one mind
and only one mind, and all minds have the same capacity.  For Mind as Third,
some minds are more intelligent than others, they hold and can deploy more
relationships than others, and this is a meaningful distinction.  This is
the level on which we are operating as AGI engineers.

-- Ben G




RE: [agi] How wrong are these numbers?

2002-12-03 Thread Ben Goertzel

Kevin wrote:
  I think Ben is closer than
 anyone to having a true mapping of the
 brain and its capabilities.  As to whether it ultimately develops the
 emergent qualities we speak of..time will tell...even
 if it falls short of singularity type hype,  i believe it can provide
 tremendous benefits to humanity, and that's what I care about.

I appreciate your enthusiasm & support.

I'd like to clarify, however, that I'm not actually trying to map or model
the human brain, but ONLY to emulate (and eventually exceed) its
capabilities.  I have studied neuroscience fairly extensively, but have
chosen to make Novamente very unbrainlike in many ways, in order to adapt it
better to the available hardware platform.

Alan Grimes wrote:
 Okay, I have some theories... Unfortunately I'm only a theorist so I'll
  need some code-slaves to make any progress but I think that's doable.

Code-slaves, huh?

I suggest that you're unlikely to make much progress with this management
philosophy ;-)


Kevin wrote:
 If you read any discussion of the Singularity, it's hard to separate what
 is being said from theism.  These machines are given God-like qualities
 and powers.  It smacks almost of a new religion with its dogma of 1's and
 0's, but reeks of the same old idea that I am flawed and weak and small
 and mortal and I want to be super and supremely smart and immortal!  I
 can't blame people for looking for such things, as the human condition is
 a rather sad one...

Well, yeah.  I am flawed and weak and small and mortal, and I want to be
super and supremely smart and immortal.

The interesting thing to come to terms with is that transforming oneself
into a superbeing is in effect a form of *death*.

One's current self is really disappearing, if one transforms oneself that
completely.  What is the thread of identity/awareness that is left,
surviving such a transition???

-- Ben G




Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri

Boy, I opened a can of worms... here goes...



 Kevin wrote:
   I will go as far to say that any computer system we develop, even one
  that realizes all the promises of the singularity, can only match the
  capacity of the human Mind.  Why?  Because the universe is the Mind
  itself, and the computational capacity of the universe is rather
  immense and cannot be exceeded by something created within its own
  domain.

 Well... I empathize with your experiential intuition, but this doesn't
 quite feel right to me.

 Why doesn't your argument lead also to the conclusion that no computer
 system can exceed the capacity of the dog Mind?

In terms of the Mind, all dualities fall away, so dog, human, and computer
are irrelevant, and nothing is bigger or smaller than anything else...


 Why is the human Mind special?

I don't recall saying it was... But amongst animals, the human, although
intrinsically identical with the dog, is capable of directly realizing the
Mind.


 If you're going to say that the human and dog minds have the same
 capacity, then I'm going to respond that your definition of "capacity" is
 interesting, but misses some aspects of the commonsense notion of the
 capacity of a mind...


All things arise from the Mind, including phenomena, thoughts, and other
layers of reality, and including the subtle consciousness (in my Buddhist
lingo: the alaya vijnana).  But the arising and falling is only apparent
and, like a dream, leaves no stain or trace on the Mind itself.

 I turn again to the Peircean levels.  For Mind as First, there is one mind
 and only one mind, and all minds have the same capacity.  For Mind as Third,
 some minds are more intelligent than others, they hold and can deploy more
 relationships than others, and this is a meaningful distinction.  This is
 the level on which we are operating as AGI engineers.

OK.  I am certainly not discouraging that on any level...

After all, I may just be confused anyway :)


 -- Ben G



