On 12/2/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I know a little about network intrusion anomaly detection (it was my
dissertation topic), and yes it is an important lesson.
The reason such anomalies occur is because when attackers craft exploits,
they follow enough of the protocol to make
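To make the idea concrete, here is a minimal sketch of per-field anomaly
detection (my sketch, not Mahoney's actual system; the field names and data
are hypothetical): train on normal traffic, then score a packet by how many
of its protocol fields take values never seen in training, which is exactly
where crafted exploits tend to deviate.

# Sketch of per-field network anomaly detection (hypothetical field names).
# Train on "normal" traffic; score new packets by how novel each field value is.
from collections import defaultdict

class FieldAnomalyDetector:
    def __init__(self):
        self.seen = defaultdict(set)   # field name -> set of observed values

    def train(self, packets):
        for pkt in packets:            # pkt is a dict of protocol fields
            for field, value in pkt.items():
                self.seen[field].add(value)

    def score(self, pkt):
        # Count fields whose value never appeared in training traffic.
        return sum(1 for f, v in pkt.items() if v not in self.seen[f])

detector = FieldAnomalyDetector()
detector.train([{"ttl": 64, "flags": "SYN"}, {"ttl": 64, "flags": "ACK"}])
print(detector.score({"ttl": 255, "flags": "SYN"}))  # 1: the novel ttl is suspicious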
On 12/13/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 12/5/06, BillK [EMAIL PROTECTED] wrote:
It is a little annoying that he doesn't mention Damasio at all, when
Damasio has been pushing this same thesis for nearly 20 years, and
even popularized it in Descartes' Error.
(Disclaimer: I didn't
On 12/5/06, BillK [EMAIL PROTECTED] wrote:
The good news is that Minsky appears to be making the book available
online at present on his web site. *Download quick!*
http://web.media.mit.edu/~minsky/
See under publications, chapters 1 to 9.
The Emotion Machine, 9/6/2006 (chapters 1-9)
On 12/4/06, Mark Waser wrote:
Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind. The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the
On 12/5/06, BillK [EMAIL PROTECTED] wrote:
Your reasoning is getting surreal.
You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are
Mark Waser [EMAIL PROTECTED] wrote:
Are you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).
Sure. Absolutely. I'm perfectly willing to contend that it takes
intelligence to come up with excuses and that more
On 12/5/06, Richard Loosemore wrote:
There are so few people who speak up against the conventional attitude
to the [rational AI/irrational humans] idea, it is such a relief to hear
any of them speak out.
I don't know yet if I buy everything Minsky says, but I know I agree
with the spirit of
BillK wrote:
...
Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best
BillK wrote:
No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh
--- Ben Goertzel [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers
based on the correctness of the learning algorithms, and not by examining
the internal data or tracing the reasoning.
Agreed...
I believe this is the fundamental flaw of all AI systems based on
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Philip Goetz gave an example of an intrusion detection system that learned
information that was not comprehensible to humans. You argued that he
could
have understood it if he tried harder.
No, I gave five separate alternatives most of
I am becoming more and more aware of how much feature extraction and
isolation is critical to my view of AGI.
Hi,
The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(by you)?
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS
One of Nietzsche's many nice quotes is
I take your point with important caveats (that you allude to). Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the
out to be a *very* severe problem for non-massively parallel systems
But I'm not at all sure how important that difference is . . . . With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in compiling knowledge (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that
On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote:
This sounds very Searlian. The only test you seem to be referring to
is the Chinese Room test.
You misunderstand. The test is being able to form cognitive structures that
can serve as the basis for later more complicated cognitive structures.
it means much less what its implications are . . . .
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:
A nice story but it proves absolutely nothing . . . . .
It proves to me that there is no point in continuing this debate.
Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon
On 12/2/06, Mark Waser wrote:
My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.
Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it
bridge isn't going to hold up near a black hole, but it is
certainly sufficient for near-human conditions.
Mark
On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:
With many SVD systems, however, the representation is more vector-like
and *not* conducive to easy translation to human terms. I have two answers
to these cases. Answer 1 is that it is still easy for a human to look at
the closest matches to
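Answer 1 is easy to make concrete (my sketch, not from the thread; the toy
matrix and labels are hypothetical): after an SVD, a latent dimension can be
inspected by listing the items that load most heavily on it -- exactly the
closest-matches inspection described above.

# Sketch: inspecting an SVD latent dimension via its closest matches.
# The term-document matrix and labels are toy/hypothetical.
import numpy as np

terms = ["cat", "dog", "car", "truck"]
docs = np.array([
    [2, 1, 0, 0],   # document about pets
    [1, 2, 0, 0],
    [0, 0, 2, 1],   # document about vehicles
    [0, 0, 1, 2],
], dtype=float)     # rows: documents, columns: term counts

U, S, Vt = np.linalg.svd(docs, full_matrices=False)

# For the first latent dimension, show the terms with the largest loadings:
dim = 0
loadings = Vt[dim]
order = np.argsort(-np.abs(loadings))
for i in order[:2]:
    print(f"{terms[i]}: {loadings[i]:+.3f}")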
--- Philip Goetz [EMAIL PROTECTED] wrote:
On 11/30/06, James Ratcliff [EMAIL PROTECTED] wrote:
One good one:
Consciousness is a quality of the mind generally regarded to comprise
qualities such as subjectivity, self-awareness, sentience, sapience, and
the ability to perceive the
A little late on the draw here - I am a new member to the list and was
checking out the archives. I had an insight into this debate over
understanding.
James Ratcliff wrote:
Understanding is a dum-dum word; it must be specifically defined as a
concept or not used. Understanding art is a
Yes, it was insulting. I am sorry. However, I don't think this
conversation is going anywhere. There are many, many examples just of
the use
Would you argue that any of your examples produce good results that are
not comprehensible by humans? I know that you sometimes will argue that the
systems can find patterns that are both the real-world simplest explanation
and still too complex for a human to understand -- but I don't
On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Models that are simple enough to debug are too simple to scale.
The contents of a knowledge base for AGI will be beyond our ability to
comprehend.
Given sufficient time, anything should be able to be understood and
is something akin to "I don't understand it so it must be good."
AI is about solving problems that you can't solve yourself. You can program
a computer to beat you at chess. You understand the search algorithm, but
can't execute
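The chess example can be made concrete. The search algorithm below is simple
enough to understand completely, yet no human could execute it over the
millions of positions a computer visits. A minimal minimax sketch (mine, not
from the thread; the Game interface is hypothetical):

# Sketch of plain minimax -- an algorithm one can read and understand in
# full, yet could never execute by hand at the depths a computer reaches.
# The Game interface (legal_moves, apply, evaluate, is_terminal) is hypothetical;
# non-terminal states are assumed to have at least one legal move.

def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # static score of the position
    values = (
        minimax(game, game.apply(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    )
    return max(values) if maximizing else min(values)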
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them enough variables they can brute
force solutions through
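Waser's curve-fitting point is easy to demonstrate (my sketch, not from the
thread): a high-degree polynomial can pass almost exactly through its
training points, yet extrapolates wildly just outside the training interval.

# Sketch: curve-fitting interpolates well but extrapolates badly.
# Fit a high-degree polynomial to points from sin(x) on [0, 3], then
# evaluate it outside that training interval.
import numpy as np

x_train = np.linspace(0, 3, 8)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=7)   # passes (almost) exactly through the 8 points

inside, outside = 1.5, 6.0
print("inside  x=1.5:", np.polyval(coeffs, inside), "vs sin:", np.sin(inside))
print("outside x=6.0:", np.polyval(coeffs, outside), "vs sin:", np.sin(outside))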
overlooked several thousand examples is pretty insulting).
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
If you look into the literature of the past 20 years, you will easily
find several thousand examples.
I'm sorry but either you didn't understand my point or you don't know
what you are talking about (and the constant terseness of your
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears a reasonable connection
So what is your definition of understanding?
-- Matt Mahoney, [EMAIL PROTECTED]
Agreed, but I think as a first-level project I can accept the limitation of
modeling the AI 'as' a human, as we are a long way off from turning it loose
as its own robot, and this will allow it to act and reason more as we do.
Currently I have PersonAI as a subset of Person, where it will
Goals don't necessarily need to be complex or even explicitly defined. One
goal might just be to minimise the difference between experiences (whether
real or simulated) and expectations. In this way the system learns what a
normal state of being is, and detects deviations.
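One way to make that goal concrete (a minimal sketch, mine; the names are
hypothetical): keep a running expectation and treat prediction error as the
quantity to minimise, so a spike in error signals a deviation from the
normal state.

# Sketch: "minimise the difference between experience and expectation" as a
# running prediction-error signal; large error = deviation from normal.
class ExpectationModel:
    def __init__(self, rate=0.1):
        self.expected = 0.0
        self.rate = rate

    def observe(self, experience: float) -> float:
        error = experience - self.expected
        self.expected += self.rate * error    # move expectation toward experience
        return abs(error)                     # surprise: what the system minimises

model = ExpectationModel()
for reading in [1.0, 1.1, 0.9, 1.0, 5.0]:     # the last reading is anomalous
    print(round(model.observe(reading), 3))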
I don't know that I'd consider that an example of an uncomplicated
goal. That seems to me much more complicated than simple responses to
sensory inputs. Valuable, yes, and even vital for any significant
intelligence, but definitely not at the minimal level of complexity.
An example of a
Things like finding recharging sockets are really more complex goals built
on top of more primitive systems. For example, if a robot heading for a
recharging socket loses a wheel its goals should change from feeding to
calling for help. If it cannot recognise a deviation from the normal
state
Well, in the language I normally use to discuss AI planning, this
would mean that
1) keeping charged is a supergoal
2) The system knows (via hard-coding or learning) that
finding the recharging socket == keeping charged
(i.e. that the former may be considered a subgoal of the latter)
3) The
On 11/22/06, Ben Goertzel [EMAIL PROTECTED] wrote:
...
If charged becomes
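That planning language maps naturally onto a small data structure. A hedged
sketch (mine, not any project's actual representation; all names are
hypothetical): each goal carries a satisfaction test plus learned subgoals
believed to achieve it, and execution chases the deepest unsatisfied goal.

# Sketch of a supergoal/subgoal hierarchy for the recharging example.
# Names and structure are hypothetical, not any project's real API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Goal:
    name: str
    satisfied: Callable[[dict], bool]        # test against world state
    subgoals: List["Goal"] = field(default_factory=list)

    def next_unsatisfied(self, state: dict) -> "Goal | None":
        """Walk the hierarchy and return the deepest unsatisfied goal."""
        if self.satisfied(state):
            return None
        for sub in self.subgoals:
            deeper = sub.next_unsatisfied(state)
            if deeper is not None:
                return deeper
        return self

keep_charged = Goal("keep charged", lambda s: s["battery"] > 0.2)
find_socket = Goal("find recharging socket", lambda s: s["at_socket"])
keep_charged.subgoals.append(find_socket)    # learned: the former achieves the latter

state = {"battery": 0.1, "at_socket": False}
print(keep_charged.next_unsatisfied(state).name)  # -> "find recharging socket"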
Have to amend that to acts or replies,
and it could react unpredictably depending on the human's level of
understanding. If it sees a nice neat answer (like jumping through the
window because the door was blocked) that the human wasn't aware of, or was
surprised about, it would be equally good.
distribution of all environments).
-- Matt Mahoney, [EMAIL PROTECTED]
OK.
James Ratcliff wrote:
Have to amend that to acts or replies
I consider a reply an action. I'm presuming that one can monitor the
internal state of the program.
is used to evaluate the quality of the prediction.
-- Matt Mahoney, [EMAIL PROTECTED]
Ben Goertzel wrote:
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
IMO these analogies are not fair.
The mathematical notion of a
I think that generalization via lossless compression could more readily be
a requirement for an AGI.
Also I must agree with Matt that you can't have knowledge separate from
other knowledge; everything
Ben Goertzel wrote:
...
On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.
...
Ben
Given that purpose, I propose the
I'm not sure I follow every twist in this thread. No... I'm sure I don't
follow every twist in this thread.
I have a question about this compression concept. Compute the number of
pixels required to graph the Mandelbrot set at whatever detail you feel to
be sufficient for the sake of
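The answer the compression camp would give (my sketch of it, not a quote
from the thread): however many pixels you choose, the bitmap can be
regenerated by a tiny program, so its information content is bounded by the
program's length, not the pixel count. For example:

# Sketch: the Mandelbrot set as a compression example. The program below is
# a few hundred bytes, yet it regenerates a bitmap of any requested size --
# the bitmap's description length is bounded by the program, not the pixels.

def mandelbrot(width=80, height=24, max_iter=30):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.5 + 3.5 * i / width, -1.25 + 2.5 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:
                    row += " "      # escaped: outside the set
                    break
            else:
                row += "*"          # stayed bounded: inside the set
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())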
Furthermore, we learned in class recently about a case where a person was
literally born with only half a brain; I don't have that story, but here is one:
http://abcnews.go.com/2020/Health/story?id=1951748&page=1
I think all the talk about hard numbers is really off base unfortunately, and AI
shouldn't
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
reasons why it is disadvantageous --
and I know of no reasons why opacity is required for intelligence.
1. The fact that AIXI^tl is intractable is not relevant to the proof that
compression = intelligence, any more than
Mark Waser [EMAIL PROTECTED] wrote:
So *prove* to me why information theory forbids transparency of a knowledge
base.
Isn't
Matt Mahoney wrote:
Richard Loosemore [EMAIL PROTECTED] wrote:
5) I have looked at your paper and my feelings are exactly the same as
Mark's: theorems developed on erroneous assumptions are worthless.
Which assumptions are erroneous?
Marcus Hutter's work is about abstract idealizations
Mark Waser [EMAIL PROTECTED] wrote:
Give me a counter-example of knowledge that can't be isolated.
Q. Why did you turn left here
finish.
-- Matt Mahoney, [EMAIL PROTECTED]
I consider the last question in each of your examples
The main first subtitle:
Compression is Equivalent to General Intelligence
My point is that humans make decisions based on millions of facts, and we do
this every second. Every fact depends on other facts. The chain of
reasoning covers the entire knowledge base.
I said millions, but we really don't know
Matt Mahoney wrote:
I will try to answer several posts here. I said that the knowledge
base of an AGI must be opaque because it has 10^9 bits of information,
which is more than a person can comprehend. By opaque, I mean that you
can't do any better by examining or modifying the internal
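For scale (my arithmetic, not from the original mail): 10^9 bits is about
125 megabytes, or roughly 10^9 characters of English text at Shannon's
estimate of about one bit per character -- several thousand books' worth,
far more than anyone could audit line by line.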
% of the information that it uses. It simply needs to know where to find it
upon need and how to use it.
controlled but many complex and immense systems are easily bounded.
Sorry if I did not make clear the distinction between knowing the learning
algorithm for AGI (which we
Matt Mahoney wrote:
Richard Loosemore [EMAIL PROTECTED] wrote:
Understanding 10^9 bits of information is not the same as storing 10^9
bits of information.
That is true. Understanding n bits is the same as compressing some larger
training
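A crude way to see the compression/understanding link (my sketch, with zlib
standing in for a learned model): text whose regularities a model has
captured compresses to far fewer bits than data with no regularities to
learn.

# Sketch: compression as a crude proxy for having "understood" regularities.
# zlib stands in for a learned model; structured text compresses far better
# than random bytes because there are regularities to exploit.
import os
import zlib

structured = b"the cat sat on the mat. " * 100   # highly regular
random_ish = os.urandom(len(structured))          # no regularities to learn

print(len(zlib.compress(structured)))  # small: the regularities were captured
print(len(zlib.compress(random_ish)))  # roughly the original size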
but unobtainable edge case, why do you believe that Hutter has
any relevance at all?
Matt Mahoney wrote:
Richard, what is your definition of understanding? How would you test
whether a person understands art?
Turing offered a behavioral test for intelligence. My understanding of
understanding is that it is something that requires intelligence. The
connection between
Mark Waser wrote:
Are you conceding that you can predict the results of a Google
search?
OK, you are right. You can type the same query twice. Or if you live long
enough you can do it the hard way. But you won't.
Are you now conceding that it is not true that Models that are simple
The connection between intelligence and compression is not obvious
James Ratcliff [EMAIL PROTECTED] wrote:
Well, words and language-based ideas/terms adequately describe