Re: [agi] A question on the symbol-system hypothesis

2006-12-26 Thread Philip Goetz
On 12/2/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: I know a little about network intrusion anomaly detection (it was my dissertation topic), and yes it is an important lesson. The reason such anomalies occur is because when attackers craft exploits, they follow enough of the protocol to make
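
A minimal sketch of the kind of detector this thread is about, assuming a toy model that flags protocol fields whose values never appeared in attack-free training traffic; the field names and records below are illustrative, not from the system discussed:

    # Toy protocol anomaly detector: learn the value set for each field
    # from normal traffic, then score a request by its novel values.
    from collections import defaultdict

    def train(records):
        seen = defaultdict(set)
        for rec in records:                  # rec: dict of protocol fields
            for field, value in rec.items():
                seen[field].add(value)
        return seen

    def anomaly_score(rec, seen):
        # Count fields whose value was never observed during training.
        return sum(1 for f, v in rec.items() if v not in seen[f])

    normal = [{"method": "GET", "version": "HTTP/1.1"},
              {"method": "POST", "version": "HTTP/1.1"}]
    model = train(normal)
    probe = {"method": "GET", "version": "HTTP/0.9"}   # odd protocol version
    print(anomaly_score(probe, model))                 # -> 1

An exploit can follow "enough of the protocol" to be parsed and still differ from normal traffic in a few fields, which is what a score like this surfaces.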

Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-14 Thread Ricardo Barreira
On 12/13/06, Philip Goetz <[EMAIL PROTECTED]> wrote: On 12/5/06, BillK <[EMAIL PROTECTED]> wrote: It is a little annoying that he doesn't mention Damasio at all, when Damasio has been pushing this same thesis for nearly 20 years, and even popularized it in "Descartes' Error". (Disclaimer: I did

Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-13 Thread Philip Goetz
On 12/5/06, BillK <[EMAIL PROTECTED]> wrote: The good news is that Minsky appears to be making the book available online at present on his web site. *Download quick!* See under publications, chapters 1 to 9. The Emotion Machine 9/6/2006( 1 2 3 4 5 6 7

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Matt Mahoney
--- Ben Goertzel <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > My point is that when AGI is built, you will have to trust its answers > based > > on the correctness of the learning algorithms, and not by examining the > > internal data or tracing the reasoning. > > Agreed... > > >I beli

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: On 12/5/06, Charles D Hixson wrote: BillK wrote: > ... > No time inversion intended. What I intended to say was that most (all?) decisions are made subconsciously before the conscious mind starts its reason / excuse generation process. The conscious mind pretending to weigh vario

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK
On 12/5/06, Charles D Hixson wrote: BillK wrote: > ... > > Every time someone (subconsciously) decides to do something, their > brain presents a list of reasons to go ahead. The reasons against are > ignored, or weighted down to be less preferred. This applies to > everything from deciding to get

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: ... Every time someone (subconsciously) decides to do something, their brain presents a list of reasons to go ahead. The reasons against are ignored, or weighted down to be less preferred. This applies to everything from deciding to get a new job to deciding to sleep with your best

Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread BillK
On 12/5/06, Richard Loosemore wrote: There are so few people who speak up against the conventional attitude to the [rational AI/irrational humans] idea, it is such a relief to hear any of them speak out. I don't know yet if I buy everything Minsky says, but I know I agree with the spirit of it.

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
- Original Message - From: James Ratcliff To: agi@v2.listbox.com Sent: Tuesday, December 05, 2006 11:34 AM Subject: Re: [agi] A question on the symbol-system hypothesis Mark Waser <[EMAIL PROTECTED]> wrote: > Are > you saying that the more excuses we ca

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
December 05, 2006 11:34 AM Subject: Re: [agi] A question on the symbol-system hypothesis Mark Waser <[EMAIL PROTECTED]> wrote: > Are > you saying that the more excuses we can think up, the more intelligent > we are? (Actually there might be something in that!). S

Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread Richard Loosemore
Mark Waser wrote: Talk about fortuitous timing . . . . here's a link on Marvin Minsky's latest about emotions and rational thought http://www.boston.com/news/globe/health_science/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/ The most relevant line to

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
From: James Ratcliff To: agi@v2.listbox.com Sent: Tuesday, December 05, 2006 11:17 AM Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis BillK <[EMAIL PROTECTED]> wrote: On 12/4/06, Mark Waser wrote: > > Explaining our actions is the reflec

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Mark Waser <[EMAIL PROTECTED]> wrote: > Are > you saying that the more excuses we can think up, the more intelligent > we are? (Actually there might be something in that!). Sure. Absolutely. I'm perfectly willing to contend that it takes intelligence to come up with excuses and that more intell

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
From: James Ratcliff To: agi@v2.listbox.com Sent: Tuesday, December 05, 2006 11:17 AM Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis BillK <[EMAIL PROTECTED]> wrote: On 12/4/06, Mark Waser wrote: > > Explaining our actions is the re

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
BillK <[EMAIL PROTECTED]> wrote: On 12/4/06, Mark Waser wrote: > > Explaining our actions is the reflective part of our minds evaluating the > reflexive part of our mind. The reflexive part of our minds, though, > operates analogously to a machine running on compiled code with the > compilation

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
ark Waser" <[EMAIL PROTECTED]> To: Sent: Tuesday, December 05, 2006 10:05 AM Subject: Re: [agi] A question on the symbol-system hypothesis >> Are >> you saying that the more excuses we can think up, the more intelligent >> we are? (Actually there might be something in that

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
to be congruent with them (and even more so in well-balanced and happy individuals). ----- Original Message ----- From: "BillK" <[EMAIL PROTECTED]> To: Sent: Tuesday, December 05, 2006 7:03 AM Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis On 12/4/06,

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty
On 12/5/06, BillK <[EMAIL PROTECTED]> wrote: Your reasoning is getting surreal. You seem to have a real difficulty in admitting that humans behave irrationally for a lot (most?) of the time. Don't you read newspapers? You can redefine rationality if you like to say that all the crazy people are

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK
On 12/4/06, Mark Waser wrote: Explaining our actions is the reflective part of our minds evaluating the reflexive part of our mind. The reflexive part of our minds, though, operates analogously to a machine running on compiled code with the compilation of code being largely *not* under the con

Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
list don't even agree on what it means much less what its implications are . . . . - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Monday, December 04, 2006 2:03 PM Subject: Re: [agi] A question on the symbol-system hypothesis On 12/

Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Philip Goetz
On 12/3/06, Mark Waser <[EMAIL PROTECTED]> wrote: > This sounds very Searlian. The only "test" you seem to be referring to > is the Chinese Room test. You misunderstand. The test is being able to form cognitive structures that can serve as the basis for later more complicated cognitive structu

Re: Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel
But I'm not at all sure how important that difference is . . . . With the brain being a massively parallel system, there isn't necessarily a huge advantage in "compiling knowledge" (I can come up with both advantages and disadvantages) and I suspect that there are more than enough surprises that

Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
On the other hand, I think that lack of compilation is going to turn out to be a *very* severe problem for non-massively parallel systems - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Monday, December 04, 2006 1:00 PM Subject: Re: Re: Re: Re:

Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel
> Well, of course they can be explained by me -- but the acronym for > that sort of explanation is "BS" I take your point with important caveats (that you allude to). Yes, nearly all decisions are made as reflexes or pattern-matchings on what is effectively compiled knowledge; however, it is the

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
machine is (or, in reverse, no explanation = no intelligence). - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Monday, December 04, 2006 12:17 PM Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis >> We're reach

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Monday, December 04, 2006 10:45 AM Subject: Re: Re: [agi] A question on the symbol-system hypothesis On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote: > Philip Goetz gave an example of an intrusion detection s

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel
We're reaching the point of agreeing to disagree except . . . . Are you really saying that nearly all of your decisions can't be explained (by you)? Well, of course they can be explained by me -- but the acronym for that sort of explanation is "BS" One of Nietzsche's many nice quotes is (parap

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
:-) - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Monday, December 04, 2006 11:21 AM Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis Hi, The only real case where a human couldn't understand the machine's

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel
Hi, The only real case where a human couldn't understand the machine's reasoning in a case like this is where there are so many entangled variables that the human can't hold them in comprehension -- and I'll continue my contention that this case is rare enough that it isn't going to be a problem

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
or logic to effectively do statistics, then you're fine -- but I really don't see it happening. I also am becoming more and more aware of how much feature extraction and isolation is critical to my view of AGI. - Original Message - From: "Ben Goertzel" <[E

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel
On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote: > Philip Goetz gave an example of an intrusion detection system that learned > information that was not comprehensible to humans. You argued that he > could > have understood it if he tried harder. No, I gave five separate alternatives most

Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
nt that an AGI is going to have to be able to explain/be explained. - Original Message ----- From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Saturday, December 02, 2006 5:17 PM Subject: Re: [agi] A question on the symbol-system hypothesis > > --- Mark Waser &l

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Ben Goertzel
Matt Mahoney wrote: My point is that when AGI is built, you will have to trust its answers based on the correctness of the learning algorithms, and not by examining the internal data or tracing the reasoning. Agreed... I believe this is the fundamental flaw of all AI systems based on structu

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Matt Mahoney
that an AGI is going to have > > to be able to explain/be explained. > > > - Original Message ----- > From: "Matt Mahoney" <[EMAIL PROTECTED]> > To: > Sent: Saturday, December 02, 2006 5:17 PM > Subject: Re: [agi] A question on the symbol-system hypothesis >

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Mark Waser
ot do this. - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Sunday, December 03, 2006 9:17 AM Subject: Re: [agi] A question on the symbol-system hypothesis On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote: A nice story but it proves absolutely

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Mark Waser
that an AGI is going to have to be able to explain/be explained. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Saturday, December 02, 2006 5:17 PM Subject: Re: [agi] A question on the symbol-system hypothesis --- Mark Waser <[EMAIL PRO

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Charles D Hixson
variable-quality explanation of why it was a better move). ... Mark - Original Message - From: "BillK" <[EMAIL PROTECTED]> To: Sent: Saturday, December 02, 2006 2:31 PM Subject: Re: [agi] A question on the symbol-system hypothesis ... - This list is sponsored by AGIR

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Philip Goetz
On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote: A nice story but it proves absolutely nothing . . . . . It proves to me that there is no point in continuing this debate. Further, and more importantly, the pattern matcher *doesn't* understand it's results either and certainly could build upo

Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Matt Mahoney
and certainly could build upon them -- thus, it *fails* the > test as far as being the central component of an RSIAI or being able to > provide evidence as to the required behavior of such. > > - Original Message - > From: "Philip Goetz" <[EMAIL PROTECTED]>

Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Mark Waser
engineering problem rather than a science problem. Yes, my bridge isn't going to hold up near a black hole, but it is certainly sufficient for near-human conditions. Mark - Original Message - From: "BillK" <[EMAIL PROTECTED]> To: Sent: Saturday, Decemb

Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread BillK
On 12/2/06, Mark Waser wrote: My contention is that the pattern that it found was simply not translated into terms you could understand and/or explained. Further, and more importantly, the pattern matcher *doesn't* understand its results either and certainly could build upon them -- thus, it *

Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Mark Waser
build upon them -- thus, it *fails* the test as far as being the central component of an RSIAI or being able to provide evidence as to the required behavior of such. - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Friday, December 01, 2006 7:02 PM

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Kashif Shah
A little late on the draw here - I am a new member to the list and was checking out the archives. I had an insight into this debate over understanding. James Ratcliff wrote: ""Understanding" is a dum-dum word, it must be specifically defined as a concept or not used. Understanding art is a Sub

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Matt Mahoney
--- Philip Goetz <[EMAIL PROTECTED]> wrote: > On 11/30/06, James Ratcliff <[EMAIL PROTECTED]> wrote: > > One good one: > > Consciousness is a quality of the mind generally regarded to comprise > > qualities such as subjectivity, self-awareness, sentience, sapience, and > the > > ability to percei

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz
On 11/30/06, Mark Waser <[EMAIL PROTECTED]> wrote: With many SVD systems, however, the representation is more vector-like and *not* conducive to easy translation to human terms. I have two answers to these cases. Answer 1 is that it is still easy for a human to look at the closest matches t
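
A minimal sketch of "Answer 1", assuming a toy term-document matrix: take the SVD and list the words that load most strongly on a latent dimension, i.e. its closest matches (vocabulary and counts are illustrative):

    import numpy as np

    # Toy term-document counts (rows = words, columns = documents).
    words = ["gas", "tank", "fuel", "chess", "knight", "pawn"]
    X = np.array([[3, 2, 0, 0],
                  [2, 3, 0, 0],
                  [2, 2, 0, 0],
                  [0, 0, 2, 1],
                  [0, 0, 1, 2],
                  [0, 0, 1, 1]], dtype=float)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # Interpret latent dimension 0 by the words loading most strongly on it.
    order = np.argsort(-np.abs(U[:, 0]))
    print([words[i] for i in order[:3]])   # -> ['gas', 'tank', 'fuel']

Whether reading off such closest matches counts as a human "understanding" the vector is, of course, exactly what this thread is arguing about.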

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz
On 11/30/06, James Ratcliff <[EMAIL PROTECTED]> wrote: One good one: Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. (Bloc

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser
limited humans have run out of capacity -- not the complete change in understanding that you see between us and the lower animals). - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Thursday, November 30, 2006 9:30 AM Subject: Re: Re: [agi] A question on

Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread James Ratcliff
Richard Loosemore <[EMAIL PROTECTED]> wrote: Philip Goetz wrote: > On 11/17/06, Richard Loosemore wrote: >> I was saying that *because* (for independent reasons) these people's >> usage of terms like "intelligence" is so disconnected from commonsense >> usage (they idealize so extremely that the s

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Ben Goertzel
Would you argue that any of your examples produce good results that are not comprehensible by humans? I know that you sometimes will argue that the systems can find patterns that are both the real-world simplest explanation and still too complex for a human to understand -- but I don't believ

Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser
't understand it :-). - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 9:36 PM Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser
- From: "Mark Waser" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 6:21 PM Subject: Re: [agi] A question on the symbol-system hypothesis Yes, it was insulting. I am sorry. However, I don't think this conversation is going anywhere. There are many, many examples

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Richard Loosemore
Philip Goetz wrote: On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: I was saying that *because* (for independent reasons) these people's usage of terms like "intelligence" is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears a

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Ben Goertzel
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote: On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote: > I defy you to show me *any* black-box method that has predictive power > outside the bounds of its training set. All that the black-box methods are > doing is curve-fitting. If you give t

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Matt Mahoney
So what is your definition of "understanding"? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Philip Goetz <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, November 29, 2006 5:36:39 PM Subject: Re: [agi] A question on the symbol-system hypothe

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/17/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: I was saying that *because* (for independent reasons) these people's usage of terms like "intelligence" is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears a reasonable connectio

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
incomprehensible about that? Why *can't* I debug a wrong answer (assuming that I have access to the training corpus)? - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 5:17 PM Subject: Re: [agi] A question on the sym

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote: > If you look into the literature of the past 20 years, you will easily > find several thousand examples. I'm sorry but either you didn't understand my point or you don't know what you are talking about (and the constant terseness of your re

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
contending/assuming that I've overlooked several thousand examples is pretty insulting). - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 4:17 PM Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote: I defy you to show me *any* black-box method that has predictive power outside the bounds of its training set. All that the black-box methods are doing is curve-fitting. If you give them enough variables they can brute force solutions through
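
A minimal sketch of the curve-fitting point, assuming a polynomial fit to sine data (degree and data are illustrative): the fit predicts well inside the training interval and diverges badly outside it:

    import numpy as np

    x_train = np.linspace(0, 2 * np.pi, 50)
    y_train = np.sin(x_train)

    coeffs = np.polyfit(x_train, y_train, deg=9)    # "black-box" curve fit

    inside, outside = np.pi, 4 * np.pi              # in vs. out of training range
    print(np.polyval(coeffs, inside) - np.sin(inside))     # error near zero
    print(np.polyval(coeffs, outside) - np.sin(outside))   # error is enormous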

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
tt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 2:13 PM Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis AI is about solving problems that you can't solve yourself. You can program a computer to beat you at chess. You understand the s

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
ns is that you can't see inside it, it only seems like an invitation to disaster to me. So why is it a better design? All that I see here is something akin to "I don't understand it so it must be good". ----- Original Message - From: "Philip Goetz" <[EMA

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Matt Mahoney
-- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, November 29, 2006 1:25:33 PM Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis > A human doesn't have enough time to look th

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote: > A human doesn't have enough time to look through millions of pieces of > data, and doesn't have enough memory to retain them all in memory, and > certainly doesn't have the time or the memory to examine all of the > 10^(insert large number here

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
the problem -- though you may be able to solve it -- and validating your answers and placing intelligent/rational boundaries/caveats on them is not possible. - Original Message - From: "Philip Goetz" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 29, 2006 1:14 PM Sub

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote: > Even now, with a relatively primitive system like the current > Novamente, it is not pragmatically possible to understand why the > system does each thing it does. Pragmatically possible obscures the point I was trying to make with Matt.

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: >> Models that are simple enough to debug are too simple to scale. >> The contents of a knowledge base for AGI will be beyond our ability to comprehend. Given sufficient time, anything should be able to be understood an

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-23 Thread Ben Goertzel
It would be an interesting and appropriate development, of course,... Just as in humans, for instance, the goal of "getting laid" sometimes generates the subgoal of "talking to others" ... it seems indirect at first, but can be remarkably effective ;=) ben On 11/23/06, Mike Dougherty <[EMAIL PR

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Mike Dougherty
On 11/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: Well, in the language I normally use to discuss AI planning, this would mean that 1) keeping charged is a supergoal 2) The system knows (via hard-coding or learning) that finding the recharging socket ==> keeping charged If "charged" become

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Ben Goertzel
Well, in the language I normally use to discuss AI planning, this would mean that 1) keeping charged is a supergoal 2) The system knows (via hard-coding or learning) that finding the recharging socket ==> keeping charged (i.e. that the former may be considered a subgoal of the latter) 3) The s
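
A minimal sketch of the structure in points 1) and 2), assuming a goal graph in which each known implication ("achieving X helps achieve Y") induces a subgoal; the goal names are illustrative:

    # Each rule says achieving `means` is known to help achieve `end`.
    rules = [("find_recharging_socket", "keep_charged"),
             ("navigate_to_wall_outlet", "find_recharging_socket")]

    def subgoals(goal, rules):
        # Everything known to lead, directly or indirectly, to `goal`.
        out = []
        for means, end in rules:
            if end == goal:
                out.append(means)
                out.extend(subgoals(means, rules))
        return out

    print(subgoals("keep_charged", rules))
    # -> ['find_recharging_socket', 'navigate_to_wall_outlet']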

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram
Things like finding recharging sockets are really more complex goals built on top of more primitive systems. For example, if a robot heading for a recharging socket loses a wheel its goals should change from feeding to calling for help. If it cannot recognise a deviation from the "normal" state

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Charles D Hixson
I don't know that I'd consider that an example of an uncomplicated goal. That seems to me much more complicated than simple responses to sensory inputs. Valuable, yes, and even vital for any significant intelligence, but definitely not at the minimal level of complexity. An example of a min

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram
Goals don't necessarily need to be complex or even explicitly defined. One "goal" might just be to minimise the difference between experiences (whether real or simulated) and expectations. In this way the system learns what a normal state of being is, and detects deviations. On 21/11/06, Charl
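
A minimal sketch of that single implicit goal, assuming a running-average predictor whose error is both the learning signal and the deviation detector; all numbers are illustrative:

    # The system's one drive: keep prediction error small. A large error
    # marks a deviation from the learned "normal" state of being.
    expectation, alpha, threshold = 0.0, 0.1, 1.0

    for observation in [0.2, 0.1, 0.3, 0.2, 5.0]:    # 5.0 is abnormal
        error = observation - expectation
        if abs(error) > threshold:
            print("deviation detected:", observation)
        expectation += alpha * error                 # reduce future surprise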

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread James Ratcliff
Agreed, but I think as a first level project I can accept the limitation of modeling the AI 'as' a human, as we are a long way off of turning it loose as its own robot, and this will allow it to act and reason more as we do. Currently I have PersonAI as a subset of Person, where it will inhe

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Charles D Hixson
OK. James Ratcliff wrote: Have to amend that to "acts or replies" I consider a reply an action. I'm presuming that one can monitor the internal state of the program. and it could react "unpredictably" depending on the human's level of understanding if it sees a nice neat answer, (like the

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
ed over a Solomonoff distribution of all environments). -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Saturday, November 18, 2006 7:42:19 AM Subject: Re: [agi] A question on the symbol-system hypothesis H

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
Mahoney, [EMAIL PROTECTED] - Original Message From: Mike Dougherty <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Saturday, November 18, 2006 1:32:05 AM Subject: Re: [agi] A question on the symbol-system hypothesis I'm not sure I follow every twist in this thread. No... I'm sur

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread James Ratcliff
Have to amend that to "acts or replies" and it could react "unpredictably" depending on the human's level of understanding if it sees a nice neat answer, (like the jumping thru the window cause the door was blocked) that the human wasn't aware of, or was surprised about it would be equally goo

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mike Dougherty
I'm not sure I follow every twist in this thread. No... I'm sure I don't follow every twist in this thread. I have a question about this compression concept. Compute the number of pixels required to graph the Mandelbrot set at whatever detail you feel to be sufficient for the sake of example.
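
A worked version of the example, assuming a small ASCII render: the pixel count grows with the chosen detail, while the program that generates every pixel stays about a dozen lines, which seems to be the compression point at issue:

    # The whole "image" of the Mandelbrot set is regenerated from this tiny program.
    width, height, max_iter = 80, 40, 30

    for row in range(height):
        line = ""
        for col in range(width):
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:          # escaped: outside the set
                    line += " "
                    break
            else:
                line += "#"             # never escaped: inside the set
        print(line)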

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Charles D Hixson
Ben Goertzel wrote: ... On the other hand, the notions of "intelligence" and "understanding" and so forth being bandied about on this list obviously ARE intended to capture essential aspects of the commonsense notions that share the same word with them. ... Ben Given that purpose, I propose the

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
intelligence does . . . . - Original Message ----- From: James Ratcliff To: agi@v2.listbox.com Sent: Friday, November 17, 2006 9:13 AM Subject: Re: [agi] A question on the symbol-system hypothesis I think that generalization via lossless compression

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Richard Loosemore
Ben Goertzel wrote: "Rings" and "Models" are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore IMO these analogies are not fair. The ma

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mark Waser
the inconsistencies and correcting them with good efficiency. Yes! Exactly and absolutely! In fact, I would almost argue that this is *all* that intelligence does . . . . - Original Message - From: James Ratcliff To: agi@v2.listbox.com Sent: Friday, November 17, 2006 9:13 AM

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
for text prediction. The lossless compression is used to evaluate the quality of the prediction. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 1:41:41 PM Subject: Re: [agi] A que
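
A minimal sketch of how lossless compression scores a predictor, assuming an order-0 character model; a model that assigns the text higher probability yields a shorter ideal code length, so compressed size ranks predictors (the text is illustrative):

    import math
    from collections import Counter

    text = "the cat sat on the mat"
    total = len(text)
    counts = Counter(text)

    # Ideal code length under a model: -log2 p(c) bits per character, summed.
    model_bits = sum(-math.log2(counts[c] / total) for c in text)
    uniform_bits = total * math.log2(27)   # a model that predicts nothing

    print(f"order-0 model: {model_bits:.1f} bits; uniform: {uniform_bits:.1f} bits")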

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
[EMAIL PROTECTED] - Original Message From: Mark Waser To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 3:16:54 PM Subject: Re: [agi] A question on the symbol-system hypothesis I consider the last question in each of your examples to be unreasonable (though for very different reasons).

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
agi@v2.listbox.com Sent: Thursday, November 16, 2006 2:17 PM Subject: Re: [agi] A question on the symbol-system hypothesis In the context of AIXI, intelligence is measured by an accumulated reward signal, and compression is defined by the size of a program (with respect to some

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
Thursday, November 16, 2006 3:51 PM Subject: Re: [agi] A question on the symbol-system hypothesis My point is that humans make decisions based on millions of facts, and we do this every second. Every fact depends on other facts. The chain of reasoning covers the entire knowledge base. I said "

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: James Ratcliff <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 1:41:41 PM Subject: Re: [agi] A question on the symbol-system hypothesis The main first subtitle: Compression is Equivalent to G

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
your brain before you finish. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 3:16:54 PM Subject: Re: [agi] A question on the symbol-system hypothesis I consider the last question in

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
we can't do lossless compression). - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Thursday, November 16, 2006 3:15 PM Subject: Re: Re: [agi] A question on the symbol-system hypothesis "Rings" and "Models" are appropriated terms

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
- Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Thursday, November 16, 2006 3:01 PM Subject: Re: [agi] A question on the symbol-system hypothesis Mark Waser <[EMAIL PROTECTED]> wrote: Give me a counter-example of knowledge that can't be isolated

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Ben Goertzel
"Rings" and "Models" are appropriated terms, but the mathematicians involved would never be so stupid as to confuse them with the real things. Marcus Hutter and yourself are doing precisely that. I rest my case. Richard Loosemore IMO these analogies are not fair. The mathematical notion of

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
- Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 2:17 PM Subject: Re: [agi] A question on the symbol-system hypothesis In the context of AIXI, intelligence is measured by an accumulated reward signal, and compression is defin

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Richard Loosemore
Matt Mahoney wrote: Richard Loosemore <[EMAIL PROTECTED]> wrote: 5) I have looked at your paper and my feelings are exactly the same as Mark's: theorems developed on erroneous assumptions are worthless. Which assumptions are erroneous? Marcus Hutter's work is about abstract idealizations

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
Mark Waser <[EMAIL PROTECTED]> wrote: >Give me a counter-example of knowledge that can't be isolated. Q. Why did you turn left here? A. Because I need gas. Q. Why do you need gas? A. Because the tank is almost empty. Q. How do you know? A. Because the needle is on "E". Q. How do you know? A. Becau

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
But I wouldn't doubt it. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 12:18:46 PM Subject: Re: [agi] A question on the symbol-system hypothesis 1. The fact that

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
From: Mark Waser To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 9:57:40 AM Subject: Re: [agi] A question on the symbol-system hypothesis > The knowledge base has high complexity. You can't debug it. You can examine it and edit it but you can't verify it

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
paper and my feelings are exactly the same as > Mark's: theorems developed on erroneous assumptions are worthless. Which assumptions are erroneous? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Richard Loosemore To: agi@v2.listbox.com Sent: Wednesday, November 15,

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
show why the statement is irrelevant, or d) concede the point? - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Thursday, November 16, 2006 11:52 AM Subject: Re: [agi] A question on the symbol-system hypothesis Mark Waser <[EMAIL PROTECTED]> wrote:

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
proves absolutely nothing. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 15, 2006 7:20 PM Subject: Re: [agi] A question on the symbol-system hypothesis 1. The fact that AIXI^tl is intractable is not relevant to the proof that compression = intel

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
the driver's brain.") are even worse. The human brain *is* relatively opaque in its operation but there is no good reason that I know of why this is advantageous and *many* reasons why it is disadvantageous -- and I know of no reasons why opacity is required for intelligence. - Origina
