Re: [agi] Pure reason is a disease.

2007-05-20 Thread Jiri Jelinek

Hi Mark,


AGI(s) suggest solutions & people decide what to do.

1.  People are stupid and will often decide to do things that will kill

large numbers of people.

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


I'm proposing a layered defense strategy ... Force the malevolent

individual to navigate multiple defensive layers and you better the
chances of detecting and stopping him.

Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is a bad guy.


2.  The AGI will, regardless of what you do,
fairly shortly be able to take actions on its own.


Without feelings, it cannot prefer = won't do a thing on its own.


More powerful problem solver - Sure.
The ultimate decision maker - I would not vote for that.

The point is -- you're not going to get a vote.
It's going to happen whether you like it or not.


Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


The fact that the AGI can keep digging through (and keep fixing) its

data very systematically doesn't solve the time constraint and
deadline problems.

Sure, there will be limitations. But if an AGI gets

a) start scenario
b) target scenario
c) User-provided rules to follow
d) System-config based rules to follow (e.g. don't use knowledge
marked [security_marking] when generating solutions for members of
the 'user_role_name' role)
e) deadline

then it can just show the first valid solution found, or say something
like "Sorry, can't make it" plus a reason (e.g. insufficient
knowledge/time, or thought broken by an info access restriction).
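
To make that contract concrete, here is a minimal sketch in Python (the
planner and every name in it are hypothetical, not taken from any existing
system):

    import time

    def first_valid_solution(start, target, user_rules, system_rules,
                             deadline_s, planner):
        # planner is a hypothetical callable that yields candidate plans
        # leading from the start scenario (a) toward the target scenario (b).
        t0 = time.time()
        saw_candidate = False
        for plan in planner(start, target):
            saw_candidate = True
            if time.time() - t0 > deadline_s:                      # e) deadline
                return {"ok": False, "reason": "insufficient time"}
            # c) user-provided rules and d) system-config rules must all accept it
            if all(rule(plan) for rule in list(user_rules) + list(system_rules)):
                return {"ok": True, "solution": plan}              # first valid solution wins
        reason = ("no rule-compliant solution found" if saw_candidate
                  else "insufficient knowledge")
        return {"ok": False, "reason": reason}

The planner argument stands in for whatever search the AGI actually runs;
the point is only the shape of the inputs (a-e) and the two possible outputs.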


And multiple layers of defense make it harder to hack.  Your arguments

conflict with each other.

When talking about hacking, I meant unauthorized access to and/or
modification of the AGI's resources. Considering current technology,
there are many standard ways to build multi-layer security. When it comes
to generating safe system responses to regular user requests, see above.
Being busy with the knowledge representation issues, I have not yet
figured out the exact implementation of the security-marking algorithm.
It might get tricky, and I don't think I'll find practical hints in
emotions. To some extent it might be handled by selected users.
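
For what it's worth, the role-based part of such a marking scheme can be
sketched very simply (field names like security_marking and allowed_roles
are invented here purely for illustration):

    def usable_knowledge(knowledge, user_role):
        # Keep an item only if it carries no security marking, or its
        # marking explicitly lists the requesting user's role.
        allowed = []
        for item in knowledge:
            marking = item.get("security_marking")
            if marking is None or user_role in marking.get("allowed_roles", ()):
                allowed.append(item)
        return allowed

    # Example: a guest cannot see admin-only knowledge.
    facts = [{"fact": "public detail"},
             {"fact": "sensitive detail",
              "security_marking": {"allowed_roles": ["admin"]}}]
    print(usable_knowledge(facts, "guest"))   # -> only the public detail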


Look at it this way.  Your logic says that if you can build this perfect

shining AGI on a hill -- that everything will be OK.  My emotions say that
there is far too much that can go awry if you depend upon *everything* that
you say you're depending upon *plus* everything that you don't realize
you're depending upon *plus* . . .

Playing with powerful tools always involves risks. More and more
powerful tools will be developed. If we cannot deal with that then we
don't deserve a future. But I'm optimistic. Hopefully, AGI will get it
right when asked to help us figure out how to make sure we deserve
it ;-)

Regards,
Jiri Jelinek

On 5/16/07, Mark Waser [EMAIL PROTECTED] wrote:

 AGI should IMO focus on
 a) figuring out how to reach given goals, instead of
 b) trying to guess if users want something else than
 what they actually asked for.

Absotively, positutely.  b) is a recipe for disaster and my biggest gripe
with Eliezer Yudkowsky and the SIAI.

 What is unsafe to show sometimes depends
 on the level of detail. Figuring out the safe level of detail is not
 always easy, and another
 problem is that smart users could break malevolent goals into separate
 tasks so that [at least the first generation] AGIs wouldn't be able to
 detect it even when following your emotion-related rules. The users
 could be using multiple accounts, so even if all those tasks are given
 to a single instance of an AGI, it might not be able to notice the
 master plan. So is it dangerous? Sure, it is.

Yes, nothing is fool-proof given a sufficiently talented fool.  That's why
I'm proposing a layered defense strategy.  Don't allow a single point of
failure before the world goes kaboom!  Force the malevolent individual to
navigate multiple defensive layers and you better the chances of detecting
and stopping him.

 AGI is potentially a very powerful tool, but what we do with it is up to
 us.

Nope.  It's up to the eight billion plus morons who will access it.
Actually, it's up to itself when some really bright fool modifies it in a
certain way.

2. non-optimally stored and integrated knowledge
 Then you want to fix the cause by optimizing & integrating instead of
 solving symptoms by adding backup searches.

You clearly don't get the "operating in the limited, dirty, time-constrained
world" thing.  Building knowledge in the real world always leaves a trail of
incomplete and unintegrated knowledge.  Yes, the builder always follows
behind, gathers more knowledge, and integrates it better -- but the real
world also includes time constraints and deadlines for action.  This isn't
AIXI we're talking about.  In a perfect world, 

[agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread A. T. Murray
University graduate students in computer science, linguistics, 
psychology, neuroscience and so on need a suitable topic for 
that scholarly contribution known as a Ph.D. dissertation. 
The SourceForge Mind project in artificial intelligence, 
on the other hand, needs entree into the academic AI literature. 
Why not start your academic career with a blockbuster dissertation? 

Think back to Erwin Schrodinger writing his equation in 1926. 
He got the idea from a French physicist who had recently defended 
his doctoral dissertation. Talk about the impact of a dissertation -- 
it literally had the bang of an atomic bomb. 

Your impact, Dr. Science, could be even greater. Your Ph.D. thesis 
could trigger the doomsday scenario of the Technological Singularity. 
The Singularity is near, but it hasn't happened yet because you 
have not yet submitted your bestseller-book-quality dissertation. 

CRITICAL MASS 

The AI Manhattan Project will not get off the ground until we 
assemble a scientific infrastructure of experts trained in the 
theory and practice of constructing artificially intelligent minds 
for robots. A few prototypes such as http://AIMind-I.com and 
http://mind.sourceforge.net/Mind.html are already out there, 
but we need a pre-Cambrian explosion of virally proliferating 
Mind versions if there is to be a Darwinian eco-system of AI 
Minds racing through the Internet and engaging in the mortal 
competition ending in the survival of the fittest. Your book 
qua Ph.D. dissertation may succeed where AI4U has failed -- 
as the Gutenberg Bible of the arrival of True AI-Complete. 

You may start by simply publishing a few scholarly papers on 
open-source artificial intelligence. Magazine and newspaper 
articles may flow from you, but the real target is academia. 
You are permitted -- and in fact it is your duty -- to take 
a critical stance towards the extraordinary scientific claims 
made when the Mind project asserts that AI has been solved, 
but you should shy away from embarrassing yourself through 
woefully ignorant Mentifex-bashing such as happened with the 
"Advogato Has Failed" debacle (http://www.advogato.org/article/928.html), 
where the author could not himself discredit Mentifex 
and so he ignorantly cited two attacks on Mentifex that were 
actually written by one and the same Internet cyberstalker. 
We want here a growing tree of scientific illumination, not 
a chain of thoughtless me-too ad-hominem sniper attacks. 

Above all avoid the endless, non-productive jawboning about 
artificial intelligence such as occurs year in and year out at 
http://www.mail-archive.com/agi@v2.listbox.com and other 
forums where blowhard discussants quibble about the AI climate 
but never write any code that advances the state of the art. 

So develop a thesis and run it by your faculty advisor. 
Stay aloof from the Mind project to keep your independence. 
When the facts are in and your case is made, publish and 
become a Philosophiae Doctor -- a teacher of philosophy.



Re: [agi] Pure reason is a disease.

2007-05-20 Thread Mark Waser

I wonder how vague are the rules used by major publishers to decide
what is OK to publish.


Generally, there are no rules -- it's normally just the best judgment of a 
single individual.



Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact that a particular user is
highly interested in malevolent stuff doesn't mean he is bad guy.


Sure.  There's the logic layer and the emotion layer.  Even if the logic 
layer gets convinced, the emotion layer is still there to say "Whoa.  Hold on 
a minute.  Maybe I'd better run this past some other people . . . ."


Note also, I'm not trying to detect a malevolent individual.  I'm trying to 
prevent facilitating an action that could be harmful.  I don't care about 
whether the individual is malevolent or stupid (though, in later stages, 
malevolence detection probably would be a good idea so as to possibly deny 
the user unsupervised access to the system).
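
As a sketch of how such layers could be chained (illustrative only; the
layer names echo the ones above, nothing here describes a real
implementation):

    def review_request(request, layers):
        # Each layer is a function returning "allow", "deny", or "escalate".
        # A single "deny" stops the request outright; any "escalate" forces
        # the request to be run past other people before facilitation.
        needs_human = False
        for layer in layers:
            verdict = layer(request)
            if verdict == "deny":
                return "denied"
            if verdict == "escalate":
                needs_human = True   # "Whoa. Hold on a minute..."
        return "escalated to human review" if needs_human else "allowed"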



Without feelings, it cannot prefer = won't do a thing on its own.


Nope.  Any powerful enough system is going to have programmed goals which it 
will then have to interpret, developing subgoals and a plan of action. 
While it may not have set the top-level goal(s), it certainly is operating 
on its own.



Unless we mess up, our machines do what we want.
I don't think we necessarily have to mess up.


We don't necessarily have to mess up.  I can walk a high-wire if you give me 
two hand-rails.  But not putting the hand-rails in place would be suicide 
for me.



c) User-provided rules to follow


The crux of the matter.  Can you specify rules that won't conflict with each 
other and which cover every contingency?


If so, what is the difference between them and an unshakeable attraction or 
revulsion?


   Mark


[agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

Hi all,

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a topic that normally bores me to tears, but it occurred to me that part of
my reply might be of interest to some
on this list, because it pertains to the more general issue of the
relationship btw consciousness and intelligence.

It also ties in with the importance of thinking about efficient
intelligence rather than just raw intelligence, as
discussed in the recent thread on definitions of intelligence.

Here is the relevant part of my reply about Searle:


However, a key point is: The scenario Searle describes is likely not
physically possible, due to the unrealistically large size of the rulebook.
The structures that we associate with intelligence (will, focused awareness,
etc.) in a human context, all come out of the need to do intelligent
processing within modest space and time requirements.

So when we say we feel like the {Searle+rulebook} system isn't really
understanding Chinese, what we mean is: It isn't understanding Chinese
according to the methods we are used to, which are methods adapted to deal
with modest space and time resources.

This ties in with the relationship btw intensity-of-consciousness and
degree-of-intelligence.  In real life, these seem often to be tied together,
because the cognitive structures that correlate with intensity of
consciousness are useful ones for achieving intelligent behaviors.

However, Searle's example is pathological in the sense that it posits a
system with a high degree of intelligence associated with a functionality
that is NOT associated with any intensity-of-consciousness.  But I suggest
that this pathology is due to the unrealistically large amount of computing
resources that the rulebook requires.

I.e., it is finitude of resources that causes intelligence and
intensity-of-consciousness to be correlated.  The fact that this correlation
breaks in a pathological, physically impossible case that requires
such dramatically large resources doesn't mean too much...


Note that I write about intensity of consciousness rather than presence of
consciousness.  I tend toward panpsychism but I do accept that while all
animals are conscious, some animals are more conscious than others (to
pervert Orwell).  I have elaborated on this perspective considerably in The
Hidden Pattern.

-- Ben G


RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
I'm probably not answering your question but have been thinking more on all
this.

There's the usual thermodynamics stuff and relativistic physics that is
going on with intelligence and flipping bits within this universe, versus
the no-friction universe or Newtonian setup.

But what I've been thinking -- and this is probably just reiterating what
someone else has worked through -- is that basically a large part of
intelligence is chaos control, chaos feedback loops, operating within
complexity.  Intelligence is some sort of delicate multi-vectored balancing
act between complexity and projecting, manipulating, storing/modeling, NN
training, genetic learning of the chaos and applying chaos in an environment
and optimizing its understanding and application of it.  The more intelligent,
the better handle an entity has on the chaos.  An intelligent entity can have
maximal effect with minimal energy expenditure on its environment in a
controlled manner; intelligence (or the application of) or even perhaps
consciousness is the real-time surfing of buttery effects.

So efficient intelligence involves thermodynamic power differentials of
resource consumption applied to goals, etc.  A goal would be expressed
similarly to intelligence formulae.  Really efficient means good chaos
leverage -- understanding cycles, systems, and entropy goings-on over time --
and maximizing effect with minimal I/O control for goal achievement while
utilizing the KR and the entity's resources... 

John


 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 I guess people want intelligence to be useful, not just complex :-)
 
 This raises a question.  Suppose you had a very large program consisting
 of
 random instructions.  Such a thing would have high algorithmic
 complexity, but
 most people would not say that such a thing was intelligent (depending
 on
 their favorite definition).  But how would you know?  If you didn't know
 how
 the code was generated, then how would you know that the program was
 really
 random and didn't actually solve some very hard class of problems?
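
A small illustration of that last point -- a computable proxy for algorithmic
complexity, such as compressibility, cannot tell the output of a tiny
pseudo-random program apart from genuinely structureless data (sketch only):

    import os, random, zlib

    def compression_ratio(data):
        # Crude, computable stand-in for algorithmic complexity:
        # how much zlib can shrink the data (near 1.0 means "looks random").
        return len(zlib.compress(data, 9)) / len(data)

    random.seed(42)
    pseudo = bytes(random.randrange(256) for _ in range(100000))  # made by a tiny program
    truly = os.urandom(100000)                                    # (presumably) no hidden structure
    print(compression_ratio(pseudo), compression_ratio(truly))    # both come out close to 1.0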




Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
I liked most of your points, but . . . . 

 However, Searle's example is pathological in the sense that it posits a 
 system with a high degree of intelligence associated with a functionality 
 that is NOT associated with any intensity-of-consciousness.  But I suggest 
 that this pathology is due to the unrealistically large amount of computing 
 resources that the rulebook requires.  

Not by my definition of intelligence (which requires learning/adaptation).




RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Oops, heh, I was eating French toast as I wrote this --

"intelligence (or the application of) or even perhaps consciousness is the
real-time surfing of buttery effects"

I meant butterfly effects.

John




Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

Sure... I prefer to define intelligence in terms of behavioral functionality
rather than internal properties, but you are free to define it differently
;-)

I note that if the Chinese language changes over time, then the {Searle +
rulebook} system will rapidly become less intelligent in this context 

ben g





Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Eliezer S. Yudkowsky
Why is Murray allowed to remain on this mailing list, anyway?  As a 
warning to others?  The others don't appear to be taking the hint.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Benjamin Goertzel

Personally, I find many of his posts highly entertaining...

If your sense of humor differs, you can always use the DEL key ;-)

-- Ben G

On 5/20/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:


Why is Murray allowed to remain on this mailing list, anyway?  As a
warning to others?  The others don't appear to be taking the hint.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence





Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

Intelligence, to me, is the ability to achieve complex goals...

This is one way of being functional ... a paperclip, though, is very
functional yet not very intelligent...

ben g


On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:


  Sure... I prefer to define intelligence in terms of behavioral
functionality rather than internal properties, but you are free to define it
differently ;-)

I wouldn't call learning/adaptability an internal(-only) property . . . .

 I note that if the Chinese language changes over time, then the {Searle
+ rulebook} system will rapidly become less intelligent in this context 

 See.  Now this indicates the funkiness of your definition . . . . Replace
 "intelligent" with "functional" and it makes a lot more sense.

 Actually, that raises a good question -- what is the difference between
 your "intelligent" and your "functional"?


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
Allow me to paraphrase . . . .

Something is intelligent if it is functional over a wide variety of complex 
goals.

Is that a reasonable shot at your definition?

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

Sure, that's fine...

I mean: I have given a mathematical definition before, so all these verbal
paraphrases
should be viewed as rough approximations anyway...


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
Rough approximations, maybe . . . . but you yourself have now pointed out that 
your definition is vulnerable to Searle's pathology (which is even simpler than 
the infinite AIXI effect)  :-)

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

But I don't see vulnerability to Searle's pathology as a flaw in my
definition of intelligence...

The system {Searle + rulebook} **is** intelligent but not efficiently
intelligent

I conjecture that highly efficiently intelligent systems will necessarily
possess intense consciousness and self-understanding.  (Because I think that
intense consciousness and self-understanding result from certain cognitive
structures and dynamics that I think are necessary for achieving efficient
intelligence.)

I don't think that high intelligence in principle implies intense
consciousness or self-understanding...

The reason this confuses people is that

intelligence {roughly =} efficient intelligence

for any real systems we have ever seen or know how to construct.  The only
intelligent but not efficiently intelligent systems we can talk about are
hypothetical ones like {Searle+rulebook} or AIXI or AIXItl ...
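
In symbols, one rough way to put the distinction (just a paraphrase, not
necessarily the mathematical definition mentioned elsewhere in this thread):

  I(S)  = \sum_{g \in G} w(g) \, \Pr[\, S \text{ achieves } g \,]
  EI(S) = I(S) / R(S)

where w(g) weights each goal g by its complexity and R(S) measures the
space/time resources S consumes.  {Searle + rulebook} can score high on
I(S) while scoring near zero on EI(S), because its R(S) is astronomical.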

-- Ben G


Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Jef Allbright

On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Personally, I find many of his posts highly entertaining...

If your sense of humor differs, you can always use the DEL key ;-)

-- Ben G


I initially found it sad and disturbing, no, disturbed.

Thanks to Mark I was able to see the humor in it and I've created the
appropriate filter to direct future such posts to my humor file.

- Jef



Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Richard Loosemore
Actually, I think this is a mistake, because it misses the core reason why 
Searle's argument is wrong, and repeats the mistake that he made.


(I think, btw, that this kind of situation, where people come up with 
reasons against the CR argument that are not actually applicable or 
relevant, is one of the reasons for the CR argument's longevity.  What I 
mean is:  I think you are in good company here, because so many people 
have come up with so many of these sorts of arguments).


The core reason for the failure of the CR is that it posits a situation 
in which an intelligence is implemented on top of another intelligence: 
then Searle makes an appeal to our intuitions about the conscious 
feelings of the wrong intelligence in this duo (the low-level one). 
Can't do that:  the consciousness of the top-level intelligence is the 
only one that is relevant.  Of course, the problem is that such a 
situation (one intelligence on top of another) is an exceptional case 
that one cannot make intuitive appeals about.  Searle can scream all 
he wants that it makes no sense that there could be two intelligences 
here, but that just means he is ignorant about what intelligence is:  it 
is not my job to fix Searle's ignorance.


The reason your argument is a mistake is that it also makes reference to 
the conscious awareness of the low-level intelligence (at least, that is 
what it appears to be doing).  As such, you are talking about the wrong 
intelligence, so your remarks are not relevant.


Meta comment:  I too find the CR deeply boring, but alas, you brought it 
up, so I had to say something ;-)




Richard Loosemore.



Benjamin Goertzel wrote:


Hi all,

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a topic that normally bores me to tears, but it occurred to me that part 
of my reply might be of interest to some
on this list, because it pertains to the more general issue of the 
relationship btw consciousness and intelligence.


It also ties in with the importance of thinking about efficient 
intelligence rather than just raw intelligence, as

discussed in the recent thread on definitions of intelligence.

Here is the relevant part of my reply about Searle:


However, a key point is: The scenario Searle describes is likely not 
physically possible, due to the unrealistically large size of the 
rulebook.  The structures that we associate with intelligence (will, 
focused awareness, etc.) in a human context, all come out of the need to 
do intelligent processing within modest space and time requirements. 

So when we say we feel like the {Searle+rulebook} system isn't really 
understanding Chinese, what we mean is: It isn't understanding Chinese 
according to the methods we are used to, which are methods adapted to 
deal with modest space and time resources.


This ties in with the relationship btw intensity-of-consciousness and 
degree-of-intelligence.  In real life, these seem often to be tied 
together, because the cognitive structures that correlate with intensity 
of consciousness are useful ones for achieving intelligent behaviors.


However, Searle's example is pathological in the sense that it posits a 
system with a high degree of intelligence associated with a 
functionality that is NOT associated with any 
intensity-of-consciousness.  But I suggest that this pathology is due to 
the unrealistically large amount of computing resources that the 
rulebook requires. 

I.e., it is finitude of resources that causes intelligence and 
intensity-of-consciousness to be correlated.  The fact that this 
correlation breaks in a pathological, physically-impossible case that 
requires a dramatically large amount of resources, doesn't mean too much...



Note that I write about intensity of consciousness rather than presence 
of consciousness.  I tend toward panpsychism but I do accept that while 
all animals are conscious, some animals are more conscious than others 
(to pervert Orwell).  I have elaborated on this perspective considerably 
in The Hidden Pattern.


-- Ben G


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 But what I've been thinking and this is probably just reiterating what
 someone else has worked through but basically a large part of intelligence
 is chaos control, chaos feedback loops, operating within complexity.
 Intelligence is some sort of delicate multi-vectored balancing act between
 complexity and projecting, manipulating, storing/modeling, NN training,
 genetic learning of the chaos and applying chaos in an environment and
 optimizing its understanding and application of it.  The more intelligent, the
 better handle an entity has on the chaos.  An intelligent entity can have
 maximal effect with minimal energy expenditure on its environment in a
 controlled manner; intelligence (or the application of) or even perhaps
 consciousness is the real-time surfing of butterfly effects.

I think the ability to model a chaotic process depends not so much on
intelligence (whatever that is) as it does on knowledge of the state of the
environment.  For example, a chaotic process such as x := 4x(1 - x) has a
really simple model.  Your ability to predict x after 1000 iterations depends
only on knowing the current value of x to several hundred decimal places.  It
is this type of knowledge that limits our ability to predict (and therefore
control) the weather.
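
To see how quickly that knowledge requirement bites, here is a minimal
Python sketch (the starting point 0.3 and the 1e-12 perturbation are
arbitrary choices, purely for illustration):

# Two trajectories of x := 4x(1 - x), started 1e-12 apart.  The gap roughly
# doubles each step, so after ~40 iterations the trajectories are completely
# decorrelated -- which is why predicting iterate 1000 needs hundreds of digits.
def iterate(x, n):
    for _ in range(n):
        x = 4 * x * (1 - x)
    return x

a, b = 0.3, 0.3 + 1e-12
for n in (0, 10, 20, 30, 40, 50):
    print(n, abs(iterate(a, n) - iterate(b, n)))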

I think there is a different role for chaos theory.  Richard Loosemore
describes a system as intelligent if it is complex and adaptive.  Shane Legg's
definition of universal intelligence requires (I believe) complexity but not
adaptability.  From a practical perspective I don't think it matters because
we don't know how to build useful, complex systems that are not adaptive.  For
example, large software projects (code + human programmers) are adaptive in
the sense that you can make incremental changes to the code without completely
breaking the system, just as we incrementally update DNA or neural
connections.

One counterexample is a mathematical description of a cryptographic system. 
Any change to the system renders any prior analysis of its security invalid. 
Such systems are necessarily brittle.  Out of necessity, we build systems that
have mathematical descriptions simple enough to analyze.

Stuart Kauffman [1] noted that complex systems such as DNA tend to evolve to
the boundary between stability and chaos, e.g. a Lyapunov exponent near 1 (or
its approximation in discrete systems).  I believe this is because overly
stable systems aren't very complex (can't solve hard problems) and overly
chaotic systems aren't adaptive (too brittle).
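
The stability/chaos boundary can be seen numerically on the same logistic
map, written with a parameter r as x := r*x*(1 - x); in the usual convention
the boundary sits where the Lyapunov exponent crosses zero, i.e. where the
average per-step stretching factor is about 1.  A rough Python sketch (the
r values below are picked arbitrarily for illustration):

import math

# Estimate the Lyapunov exponent of x := r*x*(1 - x) by averaging log|f'(x)|
# along a trajectory, where f'(x) = r*(1 - 2x).
def lyapunov(r, x0=0.2, burn_in=500, iters=5000):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(iters):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)) + 1e-15)
    return total / iters

for r in (2.8, 3.2, 3.5, 3.57, 3.8, 4.0):
    lam = lyapunov(r)
    print(f"r = {r:<5}  lyapunov = {lam:+.3f}  ({'chaotic' if lam > 0 else 'stable'})")

Negative (stable) versus positive (chaotic) values bracket the boundary
Kauffman is talking about.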

[1] Kauffman, Stuart A., “Antichaos and Adaptation”, Scientific American, Aug.
1991, p. 64.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Richard Loosemore

Matt Mahoney wrote:


I think there is a different role for chaos theory.  Richard Loosemore
describes a system as intelligent if it is complex and adaptive.



NO, no no no no!

I already denied this.

Misunderstanding:  I do not say that a system is intelligent if it is 
complex and adaptive.


Complex Adaptive System is a near-synonym for complex system, that's 
all.



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
 
  I think there is a different role for chaos theory.  Richard Loosemore
  describes a system as intelligent if it is complex and adaptive.
 
 
 NO, no no no no!
 
 I already denied this.
 
 Misunderstanding:  I do not say that a system is intelligent if it is 
 complex and adaptive.
 
 Complex Adaptive System is a near-synonym for complex system, that's 
 all.

OK, so what is your definition of intelligence?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Well I'm going into conjecture area because my technical knowledge of some
of these disciplines is weak, but I'll keep going just for grins.

Take an example of an entity existing in a higher level of consciousness - a
Buddha who has achieved enlightenment.  What is going on there?  Versus an
ant who operates in a lower level of consciousness, and then the average Joe
who is in a different level of consciousness.  Could it be that they are
existing in different orbits or sweet spots/equilibria regions within a
spectrum of environmental chaotic relationships?  Can the enlightened Buddha
have vast awareness as seeing cause and effect/butterfly effect as small
distances versus the ant who can't see the distance between most
cause/effects, only the very tiny ones?  An AGI could run in different
orbits/levels, and then this would allow for AGI's with really high levels
of consciousness with tiny knowledge bases or vice versa.  A Google for
example is a massive KB running in a very low orbit.  There are probably
limits to the highest levels/orbits of consciousness.  Also the orbits may
induce some sort of brittleness for entities running within them if the
entity is forced to and can't adapt to running outside of their home
orbit...

Just some thoughts.

John

 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 I think the ability to model a chaotic process depends not so much on
 intelligence (whatever that is) as it does on knowledge of the state
 of the
 environment.  For example, a chaotic process such as x := 4x(1 - x) has
 a
 really simple model.  Your ability to predict x after 1000 iterations
 depends
 only on knowing the current value of x to several hundred decimal
 places.  It
 is this type of knowledge that limits our ability to predict (and
 therefore
 control) the weather.
 
 I think there is a different role for chaos theory.  Richard Loosemore
 describes a system as intelligent if it is complex and adaptive.  Shane
 Legg's
 definition of universal intelligence requires (I believe) complexity but
 not
 adaptability.  From a practical perspective I don't think it matters
 because
 we don't know how to build useful, complex systems that are not
 adaptive.  For
 example, large software projects (code + human programmers) are adaptive
 in
 the sense that you can make incremental changes to the code without
 completely
 breaking the system, just as we incrementally update DNA or neural
 connections.
 
 One counterexample is a mathematical description of a cryptographic
 system.
 Any change to the system renders any prior analysis of its security
 invalid.
 Such systems are necessarily brittle.  Out of necessity, we build
 systems that
 have mathematical descriptions simple enough to analyze.
 
 Stuart Kaufmann [1] noted that complex systems such as DNA tend to
 evolve to
 the boundary between stability and chaos, e.g. a Lyapunov exponent near
 1 (or
 its approximation in discrete systems).  I believe this is because
 overly
 stable systems aren't very complex (can't solve hard problems) and
 overly
 chaotic systems aren't adaptive (too brittle).
 
 [1] Kauffman, Stuart A., Antichaos and Adaptation, Scientific
 American, Aug.
 1991, p. 64.
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang

Ben,

Let me try to be mathematical and behavioral, too.

Assume we finally agree on a way to measure a system's problem-solving
capability (over a wide variety of complex goals) with a numerical
function F(t), with t as the time of the measurement. The system's
resource cost is also measured by a numerical function C(t).

You and Shane believe that the value of F(t) is also a measurement of
intelligence.

Furthermore, you suggest efficient intelligence to be F(t)/C(t), and
argue that it is more realistic and relevant than raw
intelligence. You also think my definition of intelligence is roughly
the same.

But to me, in this situation intelligence is better measured by
F'(t), that is, the derivative of the capability, or how much the
capability of the system can change (usually increase), under a
constant resources supply. I believe it is also close to what Mark
said.

All three of these measurements make sense and are related to the
everyday meaning of the word intelligence, though they are very
different. For a system without adaptation ability, both F(t) and
F(t)/C(t) can be large, but F'(t) is zero --- these are conventional
computer systems, in my mind. On the other hand, systems with large
F'(t) have great potential, though they initially may not have much
problem-solving capability --- these are AI systems, according to my
definition.

For practical applications, we surely want systems with both large
F(t)/C(t) and large F'(t), and a system with huge F(t) at the cost of a
huge C(t), like AIXI, is unrealistic --- we all agree here, including
Shane, so it is not the issue. The issue is: F(t)/C(t) and F'(t) are
different (though not the opposite of each other).
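
A toy numerical illustration of the contrast (the functions and numbers
below are invented purely for illustration, and C is held constant): a
non-learning "expert" has high F(t) and F(t)/C(t) but F'(t) = 0, while a
learner starts weak but has a large F'(t).

# Toy comparison of F(t), F(t)/C(t) and a finite-difference F'(t).
def expert_F(t):                 # strong built-in capability, no adaptation
    return 80.0

def learner_F(t):                # weak at first, capability grows with experience
    return 100.0 * (1 - 0.95 ** t)

C = 10.0                         # assume both systems run on the same constant resources

def dF(F, t, h=1.0):             # finite-difference stand-in for F'(t)
    return (F(t + h) - F(t)) / h

for t in (0, 10, 50):
    for name, F in (("expert", expert_F), ("learner", learner_F)):
        print(f"t={t:<3} {name:<8} F={F(t):6.1f}  F/C={F(t) / C:5.1f}  F'={dF(F, t):5.2f}")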

Pei


On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Sure, that's fine...

I mean: I have given a mathematical definition before, so all these verbal
paraphrases
should be viewed as rough approximations anyway...


On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:



 Allow me to paraphrase . . . .

 Something is intelligent if it is functional over a wide variety of
complex goals.

 Is that a reasonable shot at your definition?

 - Original Message -
 From: Benjamin Goertzel
 To: agi@v2.listbox.com

 Sent: Sunday, May 20, 2007 2:41 PM
 Subject: Re: [agi] Relationship btw consciousness and intelligence


 Intelligence, to me, is the ability to achieve complex goals...

 This is one way of being functional ... a paperclip, though, is very
functional yet not very intelligent...

 ben g



 On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
 
 
   Sure... I prefer to define intelligence in terms of behavioral
functionality rather than internal properties, but you are free to define it
differently ;-)
 
  I wouldn't call learning/adaptability an internal(-only) property . . .
.
 
   I note that if the Chinese language changes over time, then the
{Searle + rulebook} system will rapidly become less intelligent in this
context 
 
  See.  Now this indicates the funkiness of your definition . . . .
Replace "intelligent" with "functional" and it makes a lot more sense.
 
  Actually, that raises a good question -- What is the difference between
your "intelligent" and your "functional"?
 
  - Original Message -
  From: Benjamin Goertzel
  To: agi@v2.listbox.com
 
  Sent: Sunday, May 20, 2007 2:11 PM
  Subject: Re: [agi] Relationship btw consciousness and intelligence
 
 
  Sure... I prefer to define intelligence in terms of behavioral
functionality rather than internal properties, but you are free to define it
differently ;-)
 
  I note that if the Chinese language changes over time, then the {Searle
+ rulebook} system will rapidly become less intelligent in this context 
 
  ben g
 
 
 
  On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
  
  
  
   I liked most of your points, but . . . .
  
However, Searle's example is pathological in the sense that it
posits a system with a high degree of intelligence associated with a
functionality that is NOT associated with any intensity-of-consciousness.
But I suggest that this pathology is due to the unrealistically large amount
of computing resources that the rulebook requires.
  
   Not by my definition of intelligence (which requires
learning/adaptation).
  
  
  
  
   - Original Message -
   From: Benjamin Goertzel
   To: agi@v2.listbox.com
   Sent: Sunday, May 20, 2007 1:24 PM
   Subject: [agi] Relationship btw consciousness and intelligence
  
  
   Hi all,
  
   Someone emailed me recently about Searle's Chinese Room argument,
  
   http://en.wikipedia.org/wiki/Chinese_room
  
   a topic that normally bores me to tears, but it occurred to me that
part of my reply might be of interest to some
   on this list, because it pertains to the more general issue of the
relationship btw consciousness and intelligence.
  
   It also ties in with the importance of thinking about efficient
intelligence rather than just raw intelligence, as
   discussed in the recent thread on definitions of 

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:


 Seems to me like you're going through *a lot* of effort for the same
effect + a lot of confusion

You conjecture that highly efficiently intelligent systems will
necessarily possess intense consciousness and self-understanding.

Isn't "possess intense consciousness and self-understanding" exactly the
same as "learn"?




They are not the same thing, although both are apparently necessary to
achieve efficient intelligence

So aren't you just saying that highly efficiently intelligent systems will

necessarily learn?

And why don't we just simplify "highly efficiently intelligent" as
"intelligent" -- and just, by fiat, declare that anything that isn't highly
efficiently intelligent is merely (at best) "reflexively functional".




You're just expressing a different taste in mapping formal definitions onto
English phrases.

I really don't think it matters what mapping you choose, so long as the
mapping is defined clearly...

However, I am coming to the opinion that mapping any formal definition into
the NL term intelligence is a political error...


From now on maybe I will use


raw intelligence = complexity of goals achievable

efficient intelligence = sum of (goal complexity)/(resources required to
achieve goal)

and not try to attach any single formal definition to the obviously highly
ambiguous
NL term intelligence ..
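
In symbols, one possible reading (assuming a set G of achieved goals, with
c(g) the complexity of goal g and r(g) the resources spent on it; taking
"complexity of goals achievable" as a sum over G is itself an assumption):

% One possible formalization of the two working definitions above.
\[
  I_{\mathrm{raw}} \;=\; \sum_{g \in G} c(g),
  \qquad
  I_{\mathrm{eff}} \;=\; \sum_{g \in G} \frac{c(g)}{r(g)}
\]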

-- Ben G

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

 Well I'm going into conjecture area because my technical knowledge of some
 of these disciplines is weak, but I'll keep going just for grins.
 
 Take an example of an entity existing in a higher level of consciousness - a
 Buddha who has achieved enlightenment.  What is going on there?  Versus an
 ant who operates in a lower level of consciousness, and then the average Joe
 who is in a different level of consciousness.  Could it be that they are
 existing in different orbits or sweet spots/equilibria regions within a
 spectrum of environmental chaotic relationships?  Can the enlightened Buddha
 have vast awareness as seeing cause and effect/butterfly effect as small
 distances versus the ant who can't see the distance between most
 cause/effects, only the very tiny ones?  An AGI could run in different
 orbits/levels, and then this would allow for AGI's with really high levels
 of consciousness with tiny knowledge bases or vice versa.  A Google for
 example is a massive KB running in a very low orbit.  There are probably
 limits to the highest levels/orbits of consciousness.  Also the orbits may
 induce some sort of brittleness for entities running within them if the
 entity is forced to and can't adapt to running outside of their home
 orbit...

I thought that Buddhist enlightenment meant realizing that seeking earthly
pleasures (short term goals) is counterproductive to the longer term goal of
happiness through enlightenment.  Thus, the Buddha is more intelligent, if you
measure intelligence by the ability to achieve goals.  (But, being
unenlightened, I could be wrong).

But I don't see how you can measure the intelligence or consciousness of an
attractor in a dynamical system.

Also, I don't believe that consciousness is something that can be detected or
measured.  It is not a requirement for intelligence.  What humans actually
have is a belief in their own consciousness.  It is part of your motivational
system and cannot be changed.  You could not disbelieve in your own
consciousness or free will, even if you wanted to.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

On 5/20/07, Pei Wang [EMAIL PROTECTED] wrote:


OK, it sounds much better than your previous descriptions to me
(though there are still issues which I'd rather not discuss now).




Much of our disagreement seems just to be about what goes in the def'n
of intelligence and what goes in theorems about the properties required
by intelligence.  Which then largely becomes a matter of taste.


But how about systems that cannot learn at all but have strong

built-in capability and efficiency (within certain domains)? Will you
say that they are intelligent but not too much, or not intelligent at
all?



I would say that they do have intelligence.

But I would conjecture that there are strict limits to how much efficient
intelligence such systems can have.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel

Adding onto the catalogue of specific sub-concepts of intelligence, we can
identify not only

raw intelligence = goal-achieving power

efficient intelligence = goal-achieving power per unit of computational
resources

adaptive intelligence = ability to achieve goals newly presented to the
system, not known to the system or its creators at the time of its creation
[the wording could probably be improved]

Shane wants to define intelligence as what I here call raw intelligence

Pei and Mark Waser want to define intelligence as what I here call adaptive
intelligence

What is interesting to me is not which one of these various people want to
identify with the NL term intelligence, but rather the relationships
between the different types of intelligence.

For example, many of us seem to support the conjecture that adaptive
intelligence is necessary for efficient intelligence
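
One way to see how the three sub-concepts come apart numerically is to score
a hypothetical log of goal attempts.  The sketch below is purely illustrative:
the records, and the choice to score adaptive intelligence as the share of
novel-goal complexity achieved, are assumptions rather than definitions
anyone has proposed here.

# Hypothetical goal-attempt log: (complexity, resources, achieved, known_at_design_time)
records = [
    (5.0, 1.0, True,  True),    # easy, pre-programmed goal
    (8.0, 2.0, True,  True),
    (9.0, 4.0, True,  False),   # novel goal, achieved anyway
    (7.0, 3.0, False, False),   # novel goal, failed
]

raw = sum(c for c, r, ok, known in records if ok)            # goal-achieving power
efficient = sum(c / r for c, r, ok, known in records if ok)  # power per unit of resources
novel = [rec for rec in records if not rec[3]]
adaptive = (sum(c for c, r, ok, known in novel if ok)
            / sum(c for c, r, ok, known in novel))           # share of novel-goal complexity achieved

print(f"raw = {raw:.1f}, efficient = {efficient:.1f}, adaptive = {adaptive:.2f}")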

-- Ben G



On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:




On 5/20/07, Pei Wang [EMAIL PROTECTED] wrote:

 OK, it sounds much better than your previous descriptions to me
 (though there are still issues which I'd rather not discuss now).



Much of our disagreement seems just to be about what goes in the def'n
of intelligence and what goes in theorems about the properties required
by intelligence.  Which then largely becomes a matter of taste.


But how about systems that cannot learn at all but have strong
 built-in capability and efficiency (within certain domains)? Will you
 say that they are intelligent but not too much, or not intelligent at
 all?


I would say that they do have intelligence.

But I would conjecture that there are strict limits to how much efficient
intelligence such systems can have.

-- Ben



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel



The reason your argument is a mistake is that it also makes reference to
the conscious awareness of the low-level intelligence (at least, that is
what it appears to be doing).  As such, you are talking about the wrong
intelligence, so your remarks are not relevant.



I didn't mean to be doing that.

Of course Searle, in the parable, has intensive consciousness, as well as
efficient and adaptive intelligence

But the knowledge of Chinese is immanent in the system {Searle+rulebook},
which does not have intensive consciousness, and in the context of knowing
Chinese, has only raw intelligence but neither efficient nor highly adaptive
intelligence...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang

On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Much of our disagreement seems just to be about what goes in the def'n
of intelligence and what goes in theorems about the properties required
by intelligence.  Which then largely becomes a matter of taste.


Part of them, yes, but not all --- our differences usually turn out to
be larger than your estimation, though smaller than mine. :)


 But how about systems that cannot learn at all but have strong
 built-in capability and efficiency (within certain domains)? Will you
 say that they are intelligent but not too much, or not intelligent at
 all?

I would say that they do have intelligence.


You see, we do disagree here. To me, system that don't learn at all or
assume infinite resources have zero intelligence, though they can be
useful for other purposes.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang

On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Adding onto the catalogue of specific sub-concepts of intelligence, we can
identify not only

raw intelligence = goal-achieving power

efficient intelligence = goal-achieving power per unit of computational
resources

adaptive intelligence = ability to achieve goals newly presented to the
system, not known to the system or its creators at the time of its creation
[the wording could probably be improved]


Again, it sounds much better, though adaptive intelligence sounds
redundant to me.


What is interesting to me is not which one of these various people want to
identify with the NL term intelligence, but rather the relationships
between the different types of intelligence.


Agree. At least people should recognize them as different, and stop
using one standard to evaluate another.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang

OK, it sounds much better than your previous descriptions to me
(though there are still issues which I'd rather not discuss now).

But how about systems that cannot learn at all but have strong
built-in capability and efficiency (within certain domains)? Will you
say that they are intelligent but not too much, or not intelligent at
all?

Pei

On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Actually, rather than F(t) we need to conceive capability as F(I)
where I is a time interval...

Then F(I) is the ability of the system to achieve complex goals
over the interval I

If I is long enough, then this encompasses the system's capability
to learn to achieve new goals based on instruction, during the
time interval I

So I would say that having a high F'(I) over short time intervals I,
is the right way to have a high (F/C)(I) over long time intervals I

This basically is the conjecture that **learning is the path to
efficient intelligence**

-- Ben


On 5/20/07, Pei Wang  [EMAIL PROTECTED] wrote:

 Ben,

 Let me try to be mathematical and behavioral, too.

 Assume we finally agree on a way to measure a system's problem-solving
 capability (over a wide variety of complex goals) with a numerical
 function F(t), with t as the time of the measurement. The system's
 resources cost is also measured by a numerical function C(t).

 You and Shane believe that the value of F(t) is also a measurement of
 intelligence.

 Furthermore, you suggest efficient intelligence to be F(t)/C(t), and
 arguing that it is more realistic and relevant than raw
 intelligence. You also think my definition of intelligence is roughly
 the same.

 But to me, in this situation intelligence is better measured by
 F'(t), that is, the derivative of the capability, or how much the
 capability of the system can change (usually increase), under a
 constant resources supply. I believe it is also close to what Mark
 said.

 All these three measurement makes sense and are related to the
 everyday meaning of the word intelligence, though they are very
 different. For a system without adaptation ability, both F(t) and
 F(t)/C(t) can be large, but F'(t) is zero --- this is conventional
 computer systems, in my mind. On the other hand, systems with large
 F'(t) have great potentials, though initially may not have much
 problem-solving capability --- this is AI systems, according to my
 definition.

 For practical applications, we surely want systems with both large
 F(t)/C(t) and large F'(t), and system with huge F(t) at the cost of a
 huge C(t), like AIXI, is unrealistic --- we all agree here, including
 Shane, so it is not the issue. The issue is: F(t)/C(t) and F'(t) are
 different (though not the opposite of each other).

 Pei


 On 5/20/07, Benjamin Goertzel  [EMAIL PROTECTED] wrote:
 
  Sure, that's fine...
 
  I mean: I have given a mathematical definition before, so all these
verbal
  paraphrases
  should be viewed as rough approximations anyway...
 
 
  On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
  
  
  
   Allow me to paraphrase . . . .
  
   Something is intelligent if it is functional over a wide variety
of
  complex goals.
  
   Is that a reasonable shot at your definition?
  
   - Original Message -
   From: Benjamin Goertzel
   To: agi@v2.listbox.com
  
   Sent: Sunday, May 20, 2007 2:41 PM
   Subject: Re: [agi] Relationship btw consciousness and intelligence
  
  
   Intelligence, to me, is the ability to achieve complex goals...
  
   This is one way of being functional  a paperclip though is very
  functional yet not very intelligent...
  
   ben g
  
  
  
   On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
   
   
 Sure... I prefer to define intelligence in terms of behavioral
  functionality rather than internal properties, but you are free to
define it
  differently ;-)
   
I wouldn't call learning/adaptability an internal(-only) property .
. .
  .
   
 I note that if the Chinese language changes over time, then the
  {Searle + rulebook} system will rapidly become less intelligent in this
  context 
   
See.  Now this indicates the funkiness of your definition . . . .
  Replace intelligent with functional and it makes a lot more sense.
   
Actually, that raises a good question -- What is the difference
between
  your intelligent and your functional?
   
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
   
Sent: Sunday, May 20, 2007 2:11 PM
Subject: Re: [agi] Relationship btw consciousness and intelligence
   
   
Sure... I prefer to define intelligence in terms of behavioral
  functionality rather than internal properties, but you are free to
define it
  differently ;-)
   
I note that if the Chinese language changes over time, then the
{Searle
  + rulebook} system will rapidly become less intelligent in this context

   
ben g
   
   
   
On 5/20/07, Mark Waser  [EMAIL PROTECTED] wrote:



 I liked