Hi Mark,
AGIs suggest solutions; people decide what to do.
1. People are stupid and will often decide to do things that will kill
large numbers of people.
I wonder how vague the rules are that major publishers use to decide
what is OK to publish.
I'm proposing a layered defense
University graduate students in computer science, linguistics,
psychology, neuroscience and so on need a suitable topic for
that scholarly contribution known as a Ph.D. dissertation.
The SourceForge Mind project in artificial intelligence,
on the other hand, needs entree into the academic AI
I wonder how vague the rules are that major publishers use to decide
what is OK to publish.
Generally, there are no rules -- it's normally just the best judgment of a
single individual.
Can you get more specific about the layers? How do you detect
malevolent individuals? Note that the fact
Hi all,
Someone emailed me recently about Searle's Chinese Room argument,
http://en.wikipedia.org/wiki/Chinese_room
a topic that normally bores me to tears, but it occurred to me that part of
my reply might be of interest to some
on this list, because it pertains to the more general issue of
I'm probably not answering your question but have been thinking more on all
this.
There's the usual thermodynamics stuff and relativistic physics that is
going on with intelligence and flipping bits within this universe, versus
the no-friction universe or Newtonian setup.
But what I've been
I liked most of your points, but . . . .
However, Searle's example is pathological in the sense that it posits a
system with a high degree of intelligence associated with a functionality
that is NOT associated with any intensity-of-consciousness. But I suggest
that this pathology is due
Oops heh I was eating French toast as I wrote this -
intelligence (or the application of) or even perhaps consciousness is the
real-time surfing of buttery effects
I meant butterfly effects.
John
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]]
Sent: Sunday, May 20,
Sure... I prefer to define intelligence in terms of behavioral functionality
rather than internal properties, but you are free to define it differently
;-)
I note that if the Chinese language changes over time, then the {Searle +
rulebook} system will rapidly become less intelligent in this
Why is Murray allowed to remain on this mailing list, anyway? As a
warning to others? The others don't appear to be taking the hint.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
Personally, I find many of his posts highly entertaining...
If your sense of humor differs, you can always use the DEL key ;-)
-- Ben G
On 5/20/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Why is Murray allowed to remain on this mailing list, anyway? As a
warning to others? The others
Intelligence, to me, is the ability to achieve complex goals...
This is one way of being functional; a paperclip, though, is very
functional yet not very intelligent...
ben g
On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
Sure... I prefer to define intelligence in terms of behavioral
Allow me to paraphrase . . . .
Something is intelligent if it is functional over a wide variety of complex
goals.
Is that a reasonable shot at your definition?
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Sunday, May 20, 2007 2:41 PM
Sure, that's fine...
I mean: I have given a mathematical definition before, so all these verbal
paraphrases
should be viewed as rough approximations anyway...
On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
Allow me to paraphrase . . . .
Something is intelligent if it is functional over
Rough approximations maybe . . . . but you yourself have now pointed out that
your definition is vulnerable to Searle's pathology (which is even simpler than
the infinite AIXI effect :-)
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Sunday, May 20,
But I don't see vulnerability to Searle's pathology as a flaw in my
definition of intelligence...
The system {Searle + rulebook} **is** intelligent but not efficiently
intelligent
I conjecture that highly efficiently intelligent systems will necessarily
possess intense consciousness and
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Personally, I find many of his posts highly entertaining...
If your sense of humor differs, you can always use the DEL key ;-)
-- Ben G
I initially found it sad and disturbing, no, disturbed.
Thanks to Mark I was able to see the humor
Actually, I think this a mistake, because it misses the core reason why
Searle's argument is wrong, and repeats the mistake that he made.
(I think, btw, that this kind of situation, where people come up with
reasons against the CR argument that are not actually applicable or
relevant, is one
--- John G. Rose [EMAIL PROTECTED] wrote:
But what I've been thinking (and this is probably just reiterating what
someone else has worked through) is that, basically, a large part of
intelligence is chaos control, chaos feedback loops, operating within
complexity. Intelligence is some sort of delicate
Matt Mahoney wrote:
I think there is a different role for chaos theory. Richard Loosemore
describes a system as intelligent if it is complex and adaptive.
NO, no no no no!
I already denied this.
Misunderstanding: I do not say that a system is intelligent if it is
complex and adaptive.
Well I'm going into conjecture area because my technical knowledge of some
of these disciplines is weak, but I'll keep going just for grins.
Take an example of an entity existing in a higher level of consciousness: a
Buddha who has achieved enlightenment. What is going on there? Versus an
ant
Ben,
Let me try to be mathematical and behavioral, too.
Assume we finally agree on a way to measure a system's problem-solving
capability (over a wide variety of complex goals) with a numerical
function F(t), with t as the time of the measurement. The system's
resources cost is also measured by
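The message is cut off here, but the setup can be sketched in symbols. The resource function R(t) and the ratio below are an assumed completion, not Pei's own notation:

```latex
F_{\mathrm{eff}}(t) = \frac{F(t)}{R(t)}
```

where F(t) is the measured problem-solving capability over the variety of complex goals at time t, R(t) is the system's resource cost at t, and efficient intelligence is then read as capability per unit of resources.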
On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
Seems to me like you're going through *a lot* of effort for the same
effect + a lot of confusion
You conjecture that highly efficiently intelligent systems will
necessarily possess intense consciousness and self-understanding.
Isn't possess
On 5/20/07, Pei Wang [EMAIL PROTECTED] wrote:
OK, it sounds much better than your previous descriptions to me
(though there are still issues which I'd rather not discuss now).
Much of our disagreement seems just to be about what goes in the def'n
of intelligence and what goes in theorems
Adding onto the catalogue of specific sub-concepts of intelligence, we can
identify not only
raw intelligence = goal-achieving power
efficient intelligence = goal-achieving power per unit of computational
resources
adaptive intelligence = ability to achieve goals newly presented to the
system,
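One toy way to make this catalogue concrete (the scoring scheme and all names below are illustrative assumptions on my part, not Ben's formal definitions):

```python
# Illustrative sketch: score each goal's achievement in [0, 1], then define
# the three sub-concepts over those scores. Names are hypothetical.

def raw_intelligence(scores):
    """Raw intelligence: total goal-achieving power across all goals."""
    return sum(scores.values())

def efficient_intelligence(scores, resources):
    """Efficient intelligence: goal-achieving power per unit of resources."""
    return raw_intelligence(scores) / resources

def adaptive_intelligence(scores, novel_goals):
    """Adaptive intelligence: power on goals newly presented to the system."""
    return sum(v for g, v in scores.items() if g in novel_goals)

scores = {"translate": 0.9, "navigate": 0.6, "novel_puzzle": 0.3}
print(raw_intelligence(scores))
print(efficient_intelligence(scores, resources=2.0))
print(adaptive_intelligence(scores, {"novel_puzzle"}))
```

On this reading, a system like {Searle + rulebook} could score high on raw intelligence while scoring very low on efficient intelligence, which matches the distinction drawn earlier in the thread.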
The reason your argument is a mistake is that it also makes reference to
the conscious awareness of the low-level intelligence (at least, that is
what it appears to be doing). As such, you are talking about the wrong
intelligence, so your remarks are not relevant.
I didn't mean to be doing
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Much of our disagreement seems just to be about what goes in the def'n
of intelligence and what goes in theorems about the properties required
by intelligence. Which then largely becomes a matter of taste.
Part of them, yes, but not all
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Adding onto the catalogue of specific sub-concepts of intelligence, we can
identify not only
raw intelligence = goal-achieving power
efficient intelligence = goal-achieving power per unit of computational
resources
adaptive intelligence
OK, it sounds much better than your previous descriptions to me
(though there are still issues which I'd rather not discuss now).
But how about systems that cannot learn at all but have strong
built-in capability and efficiency (within certain domains)? Will you
say that they are intelligent but