Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi Jiri, OK, I pondered it for a while and the answer is -- failure modes. Your logic is correct. If I were willing to take all of your assumptions as always true, then I would agree with you. However, logic, when it relies upon single-chain reasoning, is relatively fragile. And when it

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
Hi again, A few additional random comments . . . . :-) Intelligence is meaningless without discomfort. I would rephrase this as (or subsume this under) "intelligence is meaningless without goals" -- because discomfort is simply something that sets up a goal of "avoid me." But

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
What is meaning to a computer? Some people would say that no machine can know the meaning of text because only humans can understand language. Nope. I am *NOT* willing to do the Searle thing. Machines will know the meaning of text (i.e. understand it) when they have a coherent world model

Re: [agi] rule-based NL system

2007-05-02 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: The terms meaning and understanding are not well defined for machines. Then rigorously define them for your purposes and stop complaining. If you have an effective, coherent world model and if you can ground an input in this model then you

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Eric Baum
My point, in that essay, is that the nature of human emotions is rooted in the human brain architecture. Mark: I'll agree that human emotions are rooted in human brain architecture but there is also the question -- is there something analogous to emotion which is generally

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
OK, how about Legg's definition of universal intelligence as a measure of how a system understands its environment? OK. What purpose do you wish to use Legg's definition for? You immediately discard it below . . . . Of course it is rather impractical to test a system in a

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Mark Waser
My view is that emotions are systems programmed in by the genome to cause the computational machinery to pursue ends of interest to evolution, namely those relevant to leaving grandchildren. I would concur and rephrase it as follows: Human emotions are hard-coded goals that were

Re: [agi] rule-based NL system

2007-05-02 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: OK, how about Legg's definition of universal intelligence as a measure of how a system understands its environment? OK. What purpose do you wish to use Legg's definition for? You immediately discard it below . . . . What definition of

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
What definition of intelligence would you like to use? Legg's definition is perfectly fine for me. How about the answering machine test for intelligence? A machine passes the test if people prefer talking to it over talking to a human. For example, I prefer to buy airline tickets online

Re: [agi] rule-based NL system

2007-05-02 Thread DEREK ZAHN
Mark Waser writes: Intelligence is only as good as your model of the world and what it allows you to do (which is pretty much a paraphrasing of Legg's definition as far as I'm concerned). Since Legg's definition is quite explicitly careful not to say anything at all about the internal

Re: [agi] rule-based NL system

2007-05-02 Thread Charles D Hixson
Mark Waser wrote: What is meaning to a computer? Some people would say that no machine can know the meaning of text because only humans can understand language. Nope. I am *NOT* willing to do the Searle thing. Machines will know the meaning of text (i.e. understand it) when they have a

[agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread William Pearson
My current thinking is that it will take lots of effort by multiple people to take a concept or prototype AGI and turn it into something that is useful in the real world. And even if one or two people worked on the correct concept for their whole lives, it may not produce the full thing; they may hit

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
Excellent question! Legg's paper does talk about an agent being able to exploit any regularities in the environment; simple agents doing very basic learning by building up a table of observation and action pairs and keeping statistics on the rewards that follow; and that It is
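The simple agents this message describes -- building up a table of observation and action pairs and keeping statistics on the rewards that follow -- can be sketched in a few lines. This is an illustrative reading of that mechanism, not code from Legg's paper; the class and method names are mine.

```python
import random
from collections import defaultdict

class TabularAgent:
    """Minimal sketch of a simple table-based agent: track running
    reward statistics per (observation, action) pair and prefer the
    action with the best observed average reward."""

    def __init__(self, actions):
        self.actions = actions
        self.total = defaultdict(float)  # summed reward per (obs, action)
        self.count = defaultdict(int)    # visits per (obs, action)

    def act(self, obs):
        # Try each action at least once for this observation,
        # then exploit the best average reward seen so far.
        untried = [a for a in self.actions if self.count[(obs, a)] == 0]
        if untried:
            return random.choice(untried)
        return max(self.actions,
                   key=lambda a: self.total[(obs, a)] / self.count[(obs, a)])

    def learn(self, obs, action, reward):
        self.total[(obs, action)] += reward
        self.count[(obs, action)] += 1
```

Such an agent exploits regularities only to the extent that the same observation recurs, which is exactly the limitation the surrounding discussion is probing.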

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
I think that we're in pretty close agreement but I disagree with a few of your particular phrases. But note that in this case world model is not a model of the same world that you have a model of. For the purposes of this discussion, I'm going to declare that there is an external reality

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread Richard Loosemore
William Pearson wrote: My current thinking is that it will take lots of effort by multiple people to take a concept or prototype AGI and turn it into something that is useful in the real world. And even if one or two people worked on the correct concept for their whole lives, it may not produce the

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread Matt Mahoney
It seems like a lot of people are already highly motivated to work on AGI, and have been for years. The real problem is that everyone is working independently because (1) you are not going to convince anyone that somebody else's approach is better, and (2) everyone has a different idea of what AGI

Re: [agi] rule-based NL system

2007-05-02 Thread J. Storrs Hall, PhD.
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote: Mark Waser wrote: ... Machines will know the meaning of text (i.e. understand it) when they have a coherent world model that they ground their usage of text in. ... But note that in this case world model is not a model of the same

Re: [agi] rule-based NL system

2007-05-02 Thread Shane Legg
On 5/2/07, Mark Waser [EMAIL PROTECTED] wrote: One of the things that I think is *absolutely wrong* about Legg's paper is that he only uses more history as an example of generalization. I think that predictive power is a test for intelligence (just as he states) but that it *must* include

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
Why do you think that a Legg-Hutter style intelligence test would not expose an agent to things it hadn't seen before? I don't necessarily think that a Legg-Hutter style intelligence test would not expose an agent to things it hadn't seen before. I was objecting to the fact that your paper

Re: [agi] rule-based NL system

2007-05-02 Thread Mark Waser
No, it's not prediction - nothing can predict the unexpected. Unexpected is *totally* different from previously unseen. If you've seen that 1, 3, 5, 7, 9, 11 are large black and that 2, 4, 6, 8, 10, 12 are small red but you've never seen 13, you still have expectations about it and can
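The 1-through-12 example above can be made concrete: from the labeled examples, a learner can extract the regularity (parity) and form an expectation about the never-seen 13. This is a hedged sketch of that point; the function names are illustrative, not from the thread.

```python
def learn_parity_rule(examples):
    """examples: dict mapping number -> label.
    Learns one label per parity class and returns a predictor that
    extends that rule to numbers it has never seen."""
    by_parity = {}
    for n, label in examples.items():
        by_parity.setdefault(n % 2, set()).add(label)
    # The rule is only coherent if each parity class got a single label.
    assert all(len(labels) == 1 for labels in by_parity.values())
    rule = {parity: labels.pop() for parity, labels in by_parity.items()}
    return lambda n: rule[n % 2]

# Seen: odd numbers are "large black", even numbers are "small red".
examples = {n: "large black" for n in (1, 3, 5, 7, 9, 11)}
examples.update({n: "small red" for n in (2, 4, 6, 8, 10, 12)})
predict = learn_parity_rule(examples)
```

Here `predict(13)` yields "large black" even though 13 was never seen: previously unseen, but not unexpected, which is exactly the distinction the message draws.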

Re: [agi] rule-based NL system

2007-05-02 Thread Mike Tintner
Nothing ridiculous about it. It's totally to the point. A problem about mathematical series, which you offer, is not an adaptive/AGI problem. I assume that your machine knows about series. All AI machines will continually be presented with unseen problems. A simple calculator will encounter sums

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread Mike Tintner
Bo M: - A way to switch between representations and thinking processes when one set of methods fails. This would keep expert knowledge in one domain connected to expert knowledge from other domains. What if any approaches to MULTI-DOMAIN thinking actually exist, or have been tried?

Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-05-02 Thread James Ratcliff
What other list of specific goals though would you posit a first generation AGI accomplishing other than the ones I mentioned? James Ratcliff Mike Tintner [EMAIL PROTECTED] wrote: I think you're thinking in too limited ways about the physical tasks, simulated or embodied - although that

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread Benjamin Goertzel
The Speagram language framework allows programming in a natural-language-like idiom http://www.speagram.org/ IMO this is a fascinating and worthwhile experiment, but I'm not yet convinced it makes programming any easier... I believe a couple of the key authors of this language are on this

Re: [agi] Pure reason is a disease.

2007-05-02 Thread Jiri Jelinek
Mark, logic, when it relies upon single chain reasoning is relatively fragile. And when it rests upon bad assumptions, it can be just a roadmap to disaster. It all improves with learning. In my design (not implemented yet), AGI learns from stories and (assuming it learned enough) can complete