Re: [agi] Pure reason is a disease.

2007-05-20 Thread Jiri Jelinek
Hi Mark, AGI(s) suggest solutions; people decide what to do. 1. People are stupid and will often decide to do things that will kill large numbers of people. I wonder how vague the rules are that major publishers use to decide what is OK to publish. I'm proposing a layered defense

[agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread A. T. Murray
University graduate students in computer science, linguistics, psychology, neuroscience and so on need a suitable topic for that scholarly contribution known as a Ph.D. dissertation. The SourceForge Mind project in artificial intelligence, on the other hand, needs entree into the academic AI

Re: [agi] Pure reason is a disease.

2007-05-20 Thread Mark Waser
I wonder how vague the rules are that major publishers use to decide what is OK to publish. Generally, there are no rules -- it's normally just the best judgment of a single individual. Can you get more specific about the layers? How do you detect malevolent individuals? Note that the fact

[agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
Hi all, Someone emailed me recently about Searle's Chinese Room argument, http://en.wikipedia.org/wiki/Chinese_room a topic that normally bores me to tears, but it occurred to me that part of my reply might be of interest to some on this list, because it pertains to the more general issue of

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
I'm probably not answering your question but have been thinking more on all this. There's the usual thermodynamics stuff and relativistic physics that is going on with intelligence and flipping bits within this universe, versus the no-friction universe or Newtonian setup. But what I've been

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
I liked most of your points, but . . . . However, Searle's example is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality that is NOT associated with any intensity-of-consciousness. But I suggest that this pathology is due

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Oops, heh, I was eating French toast as I wrote this - intelligence (or the application of it) or even perhaps consciousness is the real-time surfing of buttery effects -- I meant butterfly effects. John -Original Message- From: John G. Rose [mailto:[EMAIL PROTECTED]] Sent: Sunday, May 20,

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
Sure... I prefer to define intelligence in terms of behavioral functionality rather than internal properties, but you are free to define it differently ;-) I note that if the Chinese language changes over time, then the {Searle + rulebook} system will rapidly become less intelligent in this

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Eliezer S. Yudkowsky
Why is Murray allowed to remain on this mailing list, anyway? As a warning to others? The others don't appear to be taking the hint. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Benjamin Goertzel
Personally, I find many of his posts highly entertaining... If your sense of humor differs, you can always use the DEL key ;-) -- Ben G On 5/20/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: Why is Murray allowed to remain on this mailing list, anyway? As a warning to others? The others

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
Intelligence, to me, is the ability to achieve complex goals... This is one way of being functional; a paperclip, though, is very functional yet not very intelligent... ben g On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote: Sure... I prefer to define intelligence in terms of behavioral

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
Allow me to paraphrase . . . . Something is intelligent if it is functional over a wide variety of complex goals. Is that a reasonable shot at your definition? - Original Message - From: Benjamin Goertzel To: agi@v2.listbox.com Sent: Sunday, May 20, 2007 2:41 PM

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
Sure, that's fine... I mean: I have given a mathematical definition before, so all these verbal paraphrases should be viewed as rough approximations anyway... On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote: Allow me to paraphrase . . . . Something is intelligent if it is functional over

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Mark Waser
Rough approximations maybe . . . . but you yourself have now pointed out that your definition is vulnerable to Searle's pathology (which is even simpler than the infinite AIXI effect :-) - Original Message - From: Benjamin Goertzel To: agi@v2.listbox.com Sent: Sunday, May 20,

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
But I don't see vulnerability to Searle's pathology as a flaw in my definition of intelligence... The system {Searle + rulebook} **is** intelligent but not efficiently intelligent. I conjecture that highly efficiently intelligent systems will necessarily possess intense consciousness and

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Jef Allbright
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Personally, I find many of his posts highly entertaining... If your sense of humor differs, you can always use the DEL key ;-) -- Ben G I initially found it sad and disturbing, no, disturbed. Thanks to Mark I was able to see the humor

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Richard Loosemore
Actually, I think this is a mistake, because it misses the core reason why Searle's argument is wrong, and repeats the mistake that he made. (I think, btw, that this kind of situation, where people come up with reasons against the CR argument that are not actually applicable or relevant, is one

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote: But what I've been thinking (and this is probably just reiterating what someone else has worked through) is that basically a large part of intelligence is chaos control, chaos feedback loops, operating within complexity. Intelligence is some sort of delicate

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Richard Loosemore
Matt Mahoney wrote: I think there is a different role for chaos theory. Richard Loosemore describes a system as intelligent if it is complex and adaptive. NO, no no no no! I already denied this. Misunderstanding: I do not say that a system is intelligent if it is complex and adaptive.

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: I think there is a different role for chaos theory. Richard Loosemore describes a system as intelligent if it is complex and adaptive. NO, no no no no! I already denied this. Misunderstanding: I do not say

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Well I'm going into conjecture area because my technical knowledge of some of these disciplines is weak, but I'll keep going just for grins. Take an example of an entity existing in a higher level of consciousness - a Buddha who has achieved enlightenment. What is going on there? Versus an ant

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang
Ben, Let me try to be mathematical and behavioral, too. Assume we finally agree on a way to measure a system's problem-solving capability (over a wide variety of complex goals) with a numerical function F(t), with t as the time of the measurement. The system's resource cost is also measured by
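[A minimal sketch of the framing Pei begins here, since the preview is cut off: F and t are from the message; the resource-cost function R(t) and the ratio E(t) are illustrative assumptions, not Pei's stated notation.]

    % F(t): problem-solving capability over a wide variety of complex
    %       goals, measured at time t (from the message above)
    % R(t): assumed resource-cost function, e.g. compute and memory
    %       consumed up to time t
    % E(t): one natural candidate for "efficiency of intelligence"
    %       (an assumption, not Pei's stated choice)
    \[ E(t) = \frac{F(t)}{R(t)} \]

On this reading, two systems with identical capability curves F(t) can differ sharply in E(t), which is what separates "intelligent" from "efficiently intelligent" in the {Searle + rulebook} discussion above.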

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote: Seems to me like you're going through *a lot* of effort for the same effect + a lot of confusion. You conjecture that highly efficiently intelligent systems will necessarily possess intense consciousness and self-understanding. Isn't possess

RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote: Well I'm going into conjecture area because my technical knowledge of some of these disciplines is weak, but I'll keep going just for grins. Take an example of an entity existing in a higher level of consciousness - a Buddha who has achieved

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
On 5/20/07, Pei Wang [EMAIL PROTECTED] wrote: OK, it sounds much better than your previous descriptions to me (though there are still issues which I'd rather not discuss now). Much of our disagreement seems just to be about what goes in the def'n of intelligence and what goes in theorems

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
Adding onto the catalogue of specific sub-concepts of intelligence, we can identify not only: raw intelligence = goal-achieving power; efficient intelligence = goal-achieving power per unit of computational resources; adaptive intelligence = ability to achieve goals newly presented to the system,
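[Read as rough formulas rather than prose -- a sketch only; the symbols G, R, and G_new are assumed glosses, since the message gives verbal definitions.]

    % G     : goal-achieving power, i.e. raw intelligence
    % R     : computational resources consumed
    % G_new : goal-achieving power measured only on goals newly
    %         presented to the system (adaptive intelligence)
    \[ I_{raw} = G, \qquad I_{eff} = \frac{G}{R}, \qquad I_{adapt} = G_{new} \]

Under this gloss, the {Searle + rulebook} system scores high on I_raw but very low on I_eff, and presumably low on I_adapt if the rulebook stays fixed while the Chinese language drifts.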

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Benjamin Goertzel
The reason your argument is a mistake is that it also makes reference to the conscious awareness of the low-level intelligence (at least, that is what it appears to be doing). As such, you are talking about the wrong intelligence, so your remarks are not relevant. I didn't mean to be doing

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Much of our disagreement seems just to be about what goes in the def'n of intelligence and what goes in theorems about the properties required by intelligence. Which then largely becomes a matter of taste. Part of them, yes, but not all

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Adding onto the catalogue of specific sub-concepts of intelligence, we can identify not only: raw intelligence = goal-achieving power; efficient intelligence = goal-achieving power per unit of computational resources; adaptive intelligence

Re: [agi] Relationship btw consciousness and intelligence

2007-05-20 Thread Pei Wang
OK, it sounds much better than your previous descriptions to me (though there are still issues which I'd rather not discuss now). But how about systems that cannot learn at all but have strong built-in capability and efficiency (within certain domains)? Will you say that they are intelligent but