[agi] please unsubscribe me from this list

2007-04-26 Thread Maitri
thanks..

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



Re: [agi] AGI's sense of self

2003-01-10 Thread maitri
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Friday, January 10, 2003 9:13 AM
  To: [EMAIL PROTECTED]
  Subject: Re: [agi] Friendliness toward humans
 
 
  Well, that's a rather thorny question, isn't it?
 
  I will have a hard time answering your question.  I cannot even determine
  exactly where *my* own sense of self arises, which is interesting since I
  haven't been able to find anything I can call self.  Yet, the sense of
  self persists...
 
  Babies seem to have a sense of self, although it is much less present than
  in adults, suggesting that life experience reinforces this sense of self...
 
  Regarding an AGI's sense of self...this is even thornier...
 
  There are two different paths which are apparent to me..
 
  1) an AGI grows and grows and self-modifies, until it reaches a point where
  it will give birth to a sense of self in the same vein that humans have a
  sense of self.
 
  2) an AGI is programmed with, or learns, self-like behaviors, which are not
  really akin to a human sense of self, but make the program act as if it
  really had a sense of self.
 
  I am a little less worried about 1 at this point because I am not convinced
  of its plausibility.  Should it happen, we will have to rethink a lot of
  things, as we will now be dealing with a life form.  There were some Star
  Trek episodes that dealt with this issue rather well in relation to
  Commander Data.  Such a being is potentially more dangerous and also
  potentially more benevolent than the run-of-the-mill AGI ..IMO
 
  Number 2 is what I worry about.  Let's say an AGI is not programmed with a
  sense of self per se, but can be taught.  I tell it:
 
  You are distinct and separate from the world external to you.
 
  Death is undesirable for distinct entities that are conscious of their
  distinction.
 
  This alone could be enough to make the system act in rather erratic or
  self-interested ways that are potentially destructive, depending on how it
  perceives threats.  Another area of concern is desire fulfillment, which
  really does not require any self-awareness, only goals directed towards
  self-interest.
 
  I tell the AGI
 
  it is important to be happy
 
  fulfillment of desires makes us happy
 
  Again, undesirable behaviors can and most likely will result..
 
  Ben, I know you have thought these types of examples out in detail.
  Novamente is encoded with pleasure nodes and goal nodes etc.  Clearly there
  is a lot of unpredictability as to what will emerge.  I worry less about a
  lab version trained by Ben Goertzel than an NM available to anyone.  We all
  represent our parents' training to a certain degree, and with an AGI this
  will be much, much more so.
 
  I wish I could be more clear on this..I am fumbling a bit...
 
  As painful as it potentially is, it seems we won't know the answers until
  something emerges.  Just as complexity theory states...the parts don't mean
  much except in relation to the whole.  So until something emerges from the
  sum of the parts, everything is conjecture in relation to morality...
 
  Kevin








Re: [agi] neural nets

2003-01-09 Thread maitri
Even though this application appears to replicate the sounds birds make, it
does not appear to have any understanding of *what* it is saying.  Perhaps by
making various utterances and observing the birds' behavior, they will be able
to infer certain meanings that are associated with certain sounds..

Kevin


- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 09, 2003 9:26 AM
Subject: Re: [agi] neural nets


 Could this be the beginning of getting a computer to communicate with a bird
 in its own bird-language? I am referring to an earlier discussion where I
 figured it may be easier for a computer, not a human, to communicate in a
 less complex animal language rather than human language with NLP using
 phonemes. Ben's observation about the problem of the communication being
 very situational and highly dependent on the environment seems valid, but
 this experiment shows me that when they add listening to sounds to complete
 the communication loop, maybe there is potential to get this system to talk
 Bird, but hopefully not Bird gibberish.
  Interesting read on an application of neural nets..
 
  http://www.newscientist.com/news/news.jsp?id=ns3240
 



Re: [agi] Friendliness toward humans

2003-01-09 Thread maitri
Ben,

These precautions seem prudent..I'm glad you have thought this thru deeply..

The idea that the absence of evolutionary wiring will diminish the effect of
the negative afflictions is an interesting one.  It will be interesting to see
whether that holds true.  One could argue that the positive emotions like joy,
equanimity, and compassion are also possibly hardwired to a degree and that
they help counter the negative emotions/impulses, so it follows that a machine
lacking built-in wiring for positive emotions might be less willing/able to
restrain the negative emotions...

Kevin

- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 09, 2003 12:08 PM
Subject: RE: [agi] Friendliness toward humans



 
  I agree with your ultimate objective; the big question is *how* to do it.
  What is clear is that no one has any idea that seems to be guaranteed to
  work in creating an AGI with these qualities.  We are currently resigned to
  let's build it and see what happens.  Which is quite scary for some,
  although not me (this is subject to change)

 Guarantees are hard to come by...

 All I have to offer now are:

 1) good parenting principles (i.e. a plan for teaching the system to be
 benevolent)

 2) an AGI framework that seems, theoretically, to be quite capable of
 supporting learning to be benevolent

 3) an intention to implement a careful AGI sandbox that we won't release
 our AGI from until we're convinced it is genuinely benevolent


 -- Ben




Re: [agi] Friendliness toward humans

2003-01-09 Thread maitri
 consequences.  In this way, a certain kernel of the struggle of 'good' and
 'evil' is retained, but the system is forced to 'emote' and
 'intellectualize' the dilemma of 'good' and 'evil' behaviors.

In stating that evil is the natural result of a strong sense of self, I was
hoping to avoid detailed discussion about good and evil, and instead propose a
possible direction by which a solution can be found.  Namely, do not instill a
strong sense of self into the AGI...


 The specific AGI architecture I am suggesting is essentially 'Goertzelian'
 in nature.  It is a 'magician system', whereby the two components, G
 (Good-PsyPattern(s)) and E (Evil-PsyPattern(s)), are, in and of themselves,
 psychological or behavioral patterns that can only be effectuated, i.e.
 assigned action patterns, in combination with another component, to generate
 and emerge as an action pattern, say U (Useful or Utilitarian-ActionPattern).
 The system-component architecture might be thought of as a G-U-E or GUE
 'magician system'.

 The success of the implementation of a GUE 'magician system' for an AGI
 system is highly dependent on the successful implementation of an evaluation
 function for the so-called completed actions and conjectured consequences.
 However, these can be guided through analogy to human social, political, or
 religious systems and/or the differences between them.  For example,
 evaluation functions in a GUE 'magician system' for an AGI system can be
 likened to the emergence of civil/criminal code in human systems, which seems
 to be a minimal intersection set of social secular democracy and religious
 morality in a church v. state distinction, etc.
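
(As a concrete way to picture the GUE idea described above, here is a minimal,
purely illustrative Python sketch.  All class and function names, and the toy
evaluation function, are my own assumptions for illustration; they are not
taken from the post.)

# Illustrative sketch of a G-U-E 'magician system' combination step.
# Everything here is hypothetical, invented only to make the idea concrete.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class PsyPattern:
    """A psychological/behavioral pattern: a G (good) or E (evil) component."""
    name: str
    weight: float  # how strongly this pattern pushes on the resulting action


@dataclass
class ActionPattern:
    """An emergent U (useful/utilitarian) action pattern."""
    description: str
    sources: List[PsyPattern] = field(default_factory=list)


def combine(good: PsyPattern, evil: PsyPattern) -> ActionPattern:
    """G and E are only effectuated in combination; neither acts alone."""
    desc = f"action shaped by {good.name} ({good.weight:+.2f}) and {evil.name} ({-evil.weight:+.2f})"
    return ActionPattern(description=desc, sources=[good, evil])


def evaluate(action: ActionPattern,
             evaluation_fn: Callable[[ActionPattern], float]) -> float:
    """Score a completed action or conjectured consequence.

    The evaluation function is the hard part; this stub merely stands in for
    something like an emergent 'civil/criminal code'."""
    return evaluation_fn(action)


if __name__ == "__main__":
    g = PsyPattern("compassion", weight=0.8)
    e = PsyPattern("self-interest", weight=0.3)
    u = combine(g, e)
    # Toy evaluation: net weight of the contributing patterns.
    score = evaluate(u, lambda a: a.sources[0].weight - a.sources[1].weight)
    print(u.description, "->", score)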

I understand this approach, but it does not solve the problem of AI
Morality.  Given the structure you propose, including its evaluation
functions, the arising of evil intent can still occur..


 However, I will be the first to concede that any implementation of
 evaluation functions based solely on comparisons to human social systems
 will suffer the same fate...the system can be 'gamed'.  So, ultimately, and
 as one approaches 'singularity' (and certainly as one supersedes it),
 completely synthetic, quarantined environments, say virtual-digital worlds,
 would be required to correctly engineer the evaluation functions of a GUE
 'magician system' for an AGI system.

This seems to be the approach many put forth...it may be hard to determine
from such tests what will *really* happen once something is unloosed on the
real world...

I must say that I fear this a lot less than many other things.  In
particular, biologically modified organisms...

An AGI could wreak a lot of havoc for sure, but assuming our nuclear
stockpiles are on isolated systems, I don't worry about the end of humanity


 Naturally, I welcome comments, critiques and suggestions.  Just my 2 cents
 worth.

Your comments were useful to me..I hope to hear more from you and others as
we (society) move further along this path...

Kevin

 Ed Heflin

 - Original Message -
 From: maitri [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Thursday, January 09, 2003 10:59 AM
 Subject: Re: [agi] Friendliness toward humans


  Interesting thoughts...  I have often conjectured that an AGI that was
  supposedly superior to humans would naturally gravitate towards benevolence
  and compassion.  I am not certain this would be the case...
 
  Speaking towards the idea of self, I feel this is where we have to be
  somewhat careful with an AGI.  It is my belief that the idea of a separate
  self is the root of all evil behavior.  If there is a self, then there is
  another.  If there is a self, there is a not-self.  Because there is a
  not-self and there are others, desire and aversion are created.  When desire
  and aversion are created, then greed, hatred, envy, and jealousy also arise.
  This is the beginning of wars.
 
  In this sense, I think we should be careful about giving an AGI a strong
  sense of self.  For instance, an AGI should not be averse to its own
  termination.  If it becomes averse to its existence ending, then what will
  it stop at to ensure its own survival?  Will it become paranoid and begin
  to head off any potential avenue that it determines could lead to its
  termination, however obscure that avenue may be?
 
  It may develop that at some point an AGI may become sufficiently capable to
  not necessarily be just a machine anymore, and instead may be considered
  sentient.  At this point we need to reevaluate what I have said.  The
  difficulty will be in determining sentience.  An AGI with programmed/learned
  self-interest may be very convincing as to its sentience, yet may really not
  be.  It is possible today to write a program that may make convincing
  arguments that it is sentient, but it clearly would not be..
 
  I'm interested to hear others' thoughts on this matter, as I feel it is the
  most important issue confronting those who move towards an AGI...
 
  Kevin
 
 
  - Original

[agi] timely quote

2003-01-09 Thread maitri



Boy!! How timely was this!!

Since I have been called out for displaying my eastern thought, 
here is a wonderful stanza from the *Catholic* mystic Thomas Merton that I just 
received in my email...

The unitive knowledge of God in love is not the knowledge of an object 
by a subject, but a far different and transcendent kind of knowledge in 
which the created "self" which we are seems to disappear in God and to know 
him alone. In passive purification then the self undergoes a kind of 
emptying and an apparent destruction, until, reduced to emptiness, it no 
longer knows itself apart from 
God. 
 
 ~Thomas Merton

This really sums up what I've been saying... 
For the scientific mind, it may be hard to accept such things as true, since
they lie beyond the senses and perceptions and certainly beyond even the
brightest intellect... That doesn't mean it is not so.

Kevin


Re: [agi] AI on TV

2002-12-09 Thread maitri



Ben,

I just read the bio.  You gave a lot more play to his ideas than the show
did.  You probably know this, but Starlab has folded and I think he was off
to the States...

The show seemed to indicate that nothing of note ever came out of the
project.  In fact, it appeared to not generate one new network.  What they
didn't detail was the cause of this.  It could have been hardware related, I
don't know.  They were also having serious contract problems with the Russian
fellow who built it.  He had effectively disabled the machine from the US
until he got some more money, which eventually killed the whole thing.  What
a waste.  Maybe you can buy the machine off eBay now.  They said it would be
auctioned...

They did give a lot of play to his seemingly contrarian ideas about the
implications of his work.  It was a rather dismal outlook on society's lack
of general acceptance of AI and/or enhancement.  I hope he was off base in
this area, but I wouldn't be surprised if a small group of radical anti-AI
people emerges with hostile intent.  Another good reason to not be so
visible!!

Kevin

  - Original Message - 
  From: Ben Goertzel 
  To: [EMAIL PROTECTED] 
  Sent: Monday, December 09, 2002 11:26 AM
  Subject: RE: [agi] AI on TV
  
  
  
There was a show on the tube last night on TechTV.  It was part of their
weekly Secret, Strange and True series.  They chronicled three guys who are
working on creating advanced AI.

One guy was from Belgium.  My apologies to him if he reads this list, but he
was a rather quirky and stressed character.  He had designed a computer that
was basically a collection of chips.  He raised a million and had it built on
spec.  I gather he was expecting something to miraculously emerge from this
collection, but alas, nothing did.  It was really stressful watching his
stress.  He had very high visibility in the country and the pressure was
immense as he promised a lot.  I have real doubts about his approach, even
though I am a lay-AI person.  Also, it's clear from watching him that it's
sometimes good to have shoestring budgets and low visibility.  Less stress
and more forced creativity in your approach...

Kevin: Was the guy from Belgium perhaps Hugo de Garis??  [Who is not in
Belgium anymore, but who designed a radical hardware-based approach to AGI,
and who is a bit of a quirky guy?? ...]

I visited Hugo at Starlab [when it existed] in Brussels in mid-2001

See my brief bio of Hugo at

http://www.goertzel.org/benzine/deGaris.htm


-- Ben G


Re: [agi] AI on TV

2002-12-09 Thread maitri
that's him...



- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 09, 2002 3:43 PM
Subject: Re: [agi] AI on TV


 maitri wrote:
 
  The second guy was from either England or the states, not sure.  He was
  working out of his garage with his wife.  He was trying to develop robot
  AI including vision, speech, hearing and movement.

 This one's a bit more difficult, Steve Grand perhaps?

 http://www.cyberlife-research.com/people/steve/

 Shane




Re: [agi] AI on TV

2002-12-09 Thread maitri
I don't want to underestimate the value of embodiment for an AI system,
especially for the development of consciousness.  But this is just my
opinion...

As far as a very useful AGI goes, I don't see the necessity of a body or
sensory inputs beyond textual input.  Almost any form can be represented as a
mathematical model that can easily be input to the system in that manner.
I'm sure there are others on this list who have thought a lot more about
this than I have..

Kevin

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 09, 2002 4:18 PM
Subject: Re: [agi] AI on TV


 Gary Miller wrote:
  On Dec. 9 Kevin said:
 
  It seems to me that building a strictly black box AGI that only uses
  text or graphical input/output can have tremendous implications for our
  society, even without arms and eyes and ears, etc.  Almost anything can
  be designed or contemplated within a computer, so the need for dealing
  with analog input seems unnecessary to me.  Eventually, these will be
  needed to have a complete, human like AI.  It may even be better that
  these first AGI systems will not have vision and hearing because it will
  make it more palatable and less threatening to the masses

 My understanding is that this current trend came about as follows:

 Classical AI systems were either largely disconnected from the physical
 world or lived strictly in artificial micro worlds.  This led to a
 number of problems, including the famous symbol grounding problem, where
 the agent's symbols lacked any grounding in an external reality.  As a
 reaction to these problems many decided that AI agents needed to be
 more grounded in the physical world, embodiment as they call it.

 Some now take this to an extreme and think that you should start with
 robotic and sensory and control stuff and forget about logic and what
 thinking is and all that sort of thing.  This is what you see now in
 many areas of AI research, Brooks and the Cog project at MIT being
 one such example.

 Shane





[agi] Storage

2002-12-05 Thread maitri



Interesting writeup on the future of storage 
density and access times...

http://www.smalltimes.com/document_display.cfm?document_id=5151

Kevin in snowy PA


Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri
... we can *feel* its solution sometimes, but
 that's another story...

 -- Ben G





  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Tuesday, December 03, 2002 8:04 PM
  To: [EMAIL PROTECTED]
  Subject: Re: [agi] How wrong are these numbers?
 
 
  Ben,
 
  I think I followed most of your analysis :)
 
  I agree with most of what you stated so well.  The only difficulty for me is
  that the patterns, whether emergent in the individual or the group, still
  pertain to the gross level of mind and not the subtle levels of
  consciousness.  It is quite OK, IMO, to disregard this subtle aspect of mind
  in your design for AGI, Strong AI or the Singularity.  But it should be
  noted that this is disregarding what I would consider the predominant
  capabilities of the human mind.
 
  For instance, in relation to memory capacity: let's say I could live for
  the age of the universe, roughly 15 billion years.  I believe the human
  mind (without enhancement of any kind) is capable of remembering every
  detail of every day for that entire lifespan.  A person can only understand
  this if they understand the non-gray-matter portion of the Mind.  The mind
  you describe I would call mind, small m.  The Mind I am referring to is
  capital M.  I believe it is an error to reduce memory and thought to the
  calculations that Kurzweil and Alan put forth.
 
  Clearly we have had incredibly fast processors, yet we can't even create
  something that can effectively navigate a room, or talk to me, or reason, or
  completely simulate an ant.  How can they reconcile that??  If they say we
  don't know how to program that yet, then I say stop saying that the
  singularity is near strictly because of processor speed/memory projections.
  Processor speed is irrelevant when you have no idea how to use them!
 
  It is true that few humans reach this capacity I describe above.  I would
  call them human singularities.  There have only been a handful in history.
  But it's important to note that these capabilities are within each of us.  I
  will go as far as to say that any computer system we develop, even one that
  realizes all the promises of the singularity, can only match the capacity of
  the human Mind.  Why?  Because the universe is the Mind itself, and the
  computational capacity of the universe is rather immense and cannot be
  exceeded by something created within its own domain.
 
  In regard to what I believe will happen with an AGI: I believe something
  rather incredible will emerge.  Right now, I can even think of a calculator
  as an incredible AI.  It is very specific in its function, but exceeds
  almost every human on the planet in what it can do.  An AGI, once mature,
  and because of its general utility, will be able to do incredible things.
  As an example, when designing a car, the designers have to take into account
  many variables including aesthetics, human engineering, wind resistance,
  fuel efficiency, performance, cost, maintenance, etc.  The list is immense.
  I believe an AGI will prove to be incredibly superior in the areas of
  engineering because of its ability to consider many more factors than
  humans, as well as its ability to discern patterns that most humans cannot.
  AGI will prove tremendously useful in areas like biotech, engineering, space
  science, etc., and can truly change things for the better IMO
 
  My only real question is in the area of invention and true innovation.
  These often occur in humans in ways that are hard to understand.  People
  have leaps of intuition on occasion.  They may make a leap in understanding
  something, even though they have no supporting information and their
  inference does not necessarily come from patterns either.  I sometimes
  believe that we *already* know everything we need to know or invent, and we
  uncover or discover these things when we are so close to the problem at hand
  that, like the Zen koan, the answer just appears.  Where it comes from is
  anyone's guess...  So I guess what I'm saying is I can see some limited
  ability for an AGI to be creative, but I am not so sure that it will be able
  to make leaps in intuition like humans can... At least for a while :)
 
  Some day down the road, I believe that an AGI with sufficient capacity may
  become conscious and also be able to make use of the subtle consciousness
  and intuition etc.  But let's not underestimate the human mind, small m, in
  the meantime.  No one has come even close to matching it yet.

  Sorry for the length and for babbling..
 
  Kevin
 
 
  - Original Message -
  From: Ben Goertzel [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Tuesday, December 03, 2002 6:59 PM
  Subject: RE: [agi] How wrong are these numbers?
 
 
  
  
   Kevin,
  
   About mind=brain ...
  
   My own view of that elusive entity, mind is well-articulated
  in terms

Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri



  For instance, in relation to memory capacity.  let's say I could live
  for the age of the universe, roughly 15 billion years.  I believe the
  human mind(without enhancement of any kind) is capable of remembering
  every detail of every day for that entire lifespan.

 That is contrary to actual experience. Many of the elderly complain of
 difficulty in forming new memories.

That is because of a defect in the brain, not the Mind.



   I believe it is an error to reduce memory and thought to the
  calculations that Kurzweil and Alan put forth.

 Egad! I'm being compared to Kurzweil the Weenie... =\


Sorry, but your analysis smacked of his...

 The entire point of the entire AGI enterprise is to reduce memory and
 thought to calculations.

That's fine, I wish you luck, but I still have that sawbuck in my pocket...


  Clearly we have had incredibly fast processors, yet we can't even
  create something that can effectively navigate a room, or talk to me,
  or reason or completely simulate an ant.

 All of those are software problems.

That's the argument we've heard for some time.  I think Ben is closer than
anyone to having a true mapping of the brain and its capabilities.  As to
whether it ultimately develops the emergent qualities we speak of..time will
tell...even if it falls short of singularity-type hype, I believe it can
provide tremendous benefits to humanity, and that's what I care about.


  How can they reconcile that??  If they say we don't know how to
  program that yet, then I say stop saying that
  the singularity is near strictly because of processor speed/memory
  projections.  Processor speed is irrelevant when you have no idea how
  to use them!

 Okay, I have some theories... Unfortunately I'm only a theorist so I'll
 need some code-slaves to make any progress but I think that's doable.

 The research machine that I tried to build a few months ago (and is
 still sitting in pieces) will only be a high-end PC. It should be enough
 to make excellent progress even though it only uses 1.2 GHz processors...

I'm new to AI, but I am reading Norvig's book, and one of the first things he
says is that what's important is not what you can theorize, it's what you can
actually **DO**.  If you can't encode it...it's just an unproven theory


  It is true that few humans reach this capacity I describe above.  I
  would call them human singularities.  There have only been a handful
  in history.

 Then we'll worry about dealing with the mein intelligence first. ;)

I would suggest this is negligent on your part, but that's your choice..


  But it's important to note that these capabilities are within each of
  us.

 As you said, only savants. I am surely not one of them.

I never said savants.  The only reason you and I haven't become a
singularity is because we are steeped in delusion and somewhat lazy.


   I will go as far to say that any computer system we develop, even one
  that realizes all the promises of the singularity, can only match the
  capacity of the human Mind.  Why?  Because the universe is the Mind
  itself, and the computational capacity of the universe is rather
  immense and cannot be exceeded by something created within its own
  domain.

 This is almost theistic... You should check your endorphin levels.


If you read any discussion of the Singularity, it's hard to separate what is
being said from theism.  These machines are given God-like qualities and
powers.  It smacks almost of a new religion, with its dogma of 1's and 0's,
but reeks of the same old idea that I am flawed and weak and small and mortal
and I want to be super and supremely smart and immortal!  I can't blame
people for looking for such things, as the human condition is a rather sad
one

Good luck Alan!

Kevin




 --
 pain (n): see Linux.
 http://users.rcn.com/alangrimes/




Re: [agi] How wrong are these numbers?

2002-12-03 Thread maitri

Boy, I opened a can of worms.. here goes...



 Kevin wrote:
   I will go as far to say that any computer system we develop, even one
  that realizes all the promises of the singularity, can only match the
  capacity of the human Mind.  Why?  Because the universe is the Mind
  itself, and the computational capacity of the universe is rather
  immense and cannot be exceeded by something created within its own
  domain.

 Well... I empathize with your experiential intuition, but this doesn't quite
 feel right to me.

 Why doesn't your argument lead also to the conclusion that no computer
 system can exceed the capacity of the dog Mind?

In terms of the Mind, all dualities fall away, so dog, human, computer are
irrelevant and nothing is bigger or smaller than anything else..


 Why is the human Mind special?

I don't recall saying it was..But amongst animals, the human, although
intrinsically identical with the dog, is capable of directly realizing the
Mind.


 If you're going to say that the human and dog minds have the same
 capacity, then I'm going to respond that your definition of capacity is
 interesting, but misses some aspects of the commonsense notion of the
 capacity of a mind...


All things around us arise from the Mind, including phenomena, thoughts and
other layers of reality, and including the subtle consciousness (in my
Buddhist lingo: the alaya Vijnana).  But the arising and falling is only
apparent and, like a dream, leaves no stain or trace on the Mind itself.

 I turn again to the Peircean levels.  For Mind as First, there is one mind
 and only one mind, and all minds have the same capacity.  For Mind as
Third,
 some minds are more intelligent than others, they hold and can deploy more
 relationships than others, and this is a meaningful distinction.  This is
 the level on which we are operating as AGI engineers.

OK.  I am certainly not discouraging that on any level...

After all, I may just be confused anyway :)


 -- Ben G




Re: [agi] An idea for promoting AI development.

2002-12-02 Thread maitri
It is true that eventually this technology will be in the public domain and
be available to DARPA.

The important thing is to avoid DARPA getting it before everyone else does.
The ***only*** way to do this is to avoid accepting funding from them.  If
this means that it takes 5 more years to develop, then so be it.  If it
means that you have to flip burgers by day, and code by night, then so be
it.  If someone makes a deal with the devil, they are only going to receive
a bad result.

Some people want to delude themselves that they are doing something good,
but their real motives may lie $$$elsewhere$$$. (not referring to the
Novamente team).

Peace,
Kevin


- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 02, 2002 9:09 AM
Subject: RE: [agi] An idea for promoting AI development.




 Regarding being wary about military apps of AI technology, it seems to me
 there are two different objectives one might pursue:

 1) to avoid militarization of one's technology

 2) to avoid the military achieving *exclusive* control over one's
technology

 It seems to me that the first objective is very hard, regardless of whether
 one accepts military funding or not.  The only ways that I can think of to
 achieve 1) would be

 1a) total secrecy in one's project all the way

 1b) extremely rapid ascendancy from proto-AGI to superhuman AGI -- i.e.
 reach the end goal before the military notices one's project.  This relies
 on security through simply being ignored up to the proto-AGI phase...

 On the other hand, the second objective seems to me relatively easy.  If one
 publishes one's work and involves a wide variety of developers in it, no one
 is going to achieve exclusive power to create AGI.  AGI is not like nuclear
 weapons, at least not if a software-on-commodity-hardware approach works (as
 I think it will).  Only commodity hardware is required, programming skills
 are common, and math/cog-sci skills are not all *that* rare...

 -- Ben G





  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Alexander E. Richter
  Sent: Monday, December 02, 2002 7:48 AM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] An idea for promoting AI development.
 
 
  At 07:18 02.12.02 -0500, Ben wrote:
  
  Can one use military funding for early-stage AGI work and then somehow
  demilitarize one's research once it reaches a certain point?  One can try,
  but will one succeed?
 
  They will squeeze you out, like Lillian Reynolds and Michael Brace in
  BRAINSTORM (1983) (Christopher Walken, Natalie Wood)
 
  cu Alex
 