Hi Jiri,
OK, I pondered it for a while and the answer is -- failure modes.
Your logic is correct. If I were willing to take all of your assumptions as
always true, then I would agree with you. However, logic, when it relies upon
single-chain reasoning, is relatively fragile. And when it
Hi again,
A few additional random comments . . . . :-)
Intelligence is meaningless without discomfort.
I would rephrase this as (or subsume this under) "intelligence is
meaningless without goals" -- because discomfort is simply something that sets
up a goal of "avoid me."
But
What is meaning to a computer? Some people would say that no machine can
know the meaning of text because only humans can understand language.
Nope. I am *NOT* willing to do the Searle thing. Machines will know the
meaning of text (i.e. understand it) when they have a coherent world model
--- Mark Waser [EMAIL PROTECTED] wrote:
The terms "meaning" and "understanding" are not well defined for machines.
Then rigorously define them for your purposes and stop complaining. If you
have an effective, coherent world model and if you can ground an input in
this model then you
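(A toy sketch of the sort of thing I mean by "grounding", in Python -- purely
an illustration with made-up entities, not anybody's actual system: the world
model is a little table of entities and their relations, and grounding an
input just means resolving its words to those entities.)

# Toy world model: entities with properties and relations.
world_model = {
    "cup":   {"is_a": "container", "on": "table", "color": "red"},
    "table": {"is_a": "furniture", "in": "kitchen"},
}

def ground(sentence, model):
    # "Grounding" here is simply resolving recognized words to entities.
    return {w: model[w] for w in sentence.lower().split() if w in model}

print(ground("the cup is on the table", world_model))
# {'cup': {...}, 'table': {...}} -- every content word found a referent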
My point, in that essay, is that the nature of human emotions is rooted in
the human brain architecture,
Mark> I'll agree that human emotions are rooted in human brain
Mark> architecture but there is also the question -- is there
Mark> something analogous to emotion which is generally
OK, how about Legg's definition of universal intelligence as a measure of
how a system understands its environment?
OK. What purpose do you wish to use Legg's definition for? You immediately
discard it below . . . .
Of course it is rather impractical to test a system in a
My view is that emotions are systems programmed in by the genome to
cause the computational machinery to pursue ends of interest to
evolution, namely those relevant to leaving grandchildren.
I would concur and rephrase it as follows: Human emotions are hard-coded
goals that were
--- Mark Waser [EMAIL PROTECTED] wrote:
OK, how about Legg's definition of universal intelligence as a measure of
how a system understands its environment?
OK. What purpose do you wish to use Legg's definition for? You immediately
discard it below . . . .
What definition of intelligence would you like to use?
Legg's definition is perfectly fine for me.
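For reference, the formal measure from the Legg/Hutter paper (quoting from
memory, so check the original) is roughly

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward
that agent \pi accumulates in \mu. Note that "understanding" only enters
implicitly, through reward earned across many environments.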
How about the answering machine test for intelligence? A machine passes
the test if people prefer talking to it over talking to a human. For
example, I prefer to buy airline tickets online
Mark Waser writes:
Intelligence is only as good as your model of the world and what it allows
you to do (which is pretty much a paraphrasing of Legg's definition as far
as I'm concerned).
Since Legg's definition is quite explicitly careful not to say anything
at all about the internal
Mark Waser wrote:
What is meaning to a computer? Some people would say that no machine can
know the meaning of text because only humans can understand language.
Nope. I am *NOT* willing to do the Searle thing. Machines will know
the meaning of text (i.e. understand it) when they have a
My current thinking is that it will take lots of effort by multiple
people to take a concept or prototype AGI and turn it into something
that is useful in the real world. And even if one or two people worked on
the correct concept for their whole lives it may not produce the full
thing; they may hit
Excellent question!
Legg's paper does talk about an agent being able to exploit any
regularities in the environment; simple agents doing very basic learning by
building up a table of observation and action pairs and keeping statistics on
the rewards that follow; and that It is
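To make that concrete, the simple agents being described amount to something
like the toy sketch below (my own Python illustration, not code from the
paper): a table of reward statistics per (observation, action) pair, with no
generalization at all to observations it has never seen.

import random
from collections import defaultdict

class TableAgent:
    # Toy tabular agent: average-reward statistics per (observation, action).
    def __init__(self, actions):
        self.actions = actions
        self.total = defaultdict(float)   # summed reward per (obs, action)
        self.count = defaultdict(int)     # number of times each pair was tried

    def _avg(self, obs, a):
        c = self.count[(obs, a)]
        return self.total[(obs, a)] / c if c else 0.0

    def act(self, obs):
        # Greedy on the statistics gathered so far; random among ties.
        best = max(self._avg(obs, a) for a in self.actions)
        return random.choice([a for a in self.actions if self._avg(obs, a) == best])

    def learn(self, obs, action, reward):
        self.total[(obs, action)] += reward
        self.count[(obs, action)] += 1

For an observation it has never encountered, every action looks identical
(average zero), which is exactly where the generalization argument further
down picks up.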
I think that we're in pretty close agreement but I disagree with a few of
your particular phrases.
But note that in this case "world model" is not a model of the same world
that you have a model of.
For the purposes of this discussion, I'm going to declare that there is an
external reality
William Pearson wrote:
My current thinking is that it will take lots of effort by multiple
people to take a concept or prototype AGI and turn it into something
that is useful in the real world. And even if one or two people worked on
the correct concept for their whole lives it may not produce the
It seems like a lot of people are already highly motivated to work on AGI, and
have been for years. The real problem is that everyone is working independently
because (1) you are not going to convince anyone that somebody else's approach
is better, and (2) everyone has a different idea of what AGI
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
Mark Waser wrote:
... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
...
But note that in this case "world model" is not a model of the same
On 5/2/07, Mark Waser [EMAIL PROTECTED] wrote:
One of the things that I think is *absolutely wrong* about Legg's
paper is that he only uses more history as an example of generalization. I
think that predictive power is a test for intelligence (just as he states) but
that it *must* include
Why do you think that a Legg-Hutter style intelligence test would
not expose an agent to things it hadn't seen before?
I don't necessarily think that a Legg-Hutter style intelligence test would not
expose an agent to things it hadn't seen before. I was objecting to the fact
that your paper
No, it's not prediction - nothing can predict the unexpected.
Unexpected is *totally* different from previously unseen. If you've seen
that 1, 3, 5, 7, 9, 11 are large and black and that 2, 4, 6, 8, 10, 12 are
small and red but you've never seen 13, you still have expectations about it
and can
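(A trivial sketch of that point, purely illustrative: the learner below has
only ever seen 1 through 12, yet forms an expectation about 13 by exploiting
the parity regularity rather than by looking 13 up in a table.)

# Seen data: odd numbers were large and black, even numbers small and red.
seen = {n: ("large", "black") if n % 2 else ("small", "red") for n in range(1, 13)}

def expect(n, examples):
    # Predict attributes of n from the regularity in the examples, not by lookup.
    odd_labels  = {v for k, v in examples.items() if k % 2}
    even_labels = {v for k, v in examples.items() if not k % 2}
    if len(odd_labels) == 1 and len(even_labels) == 1:
        return (odd_labels if n % 2 else even_labels).pop()
    return None   # no clean regularity found

print(expect(13, seen))   # ('large', 'black') -- an expectation about the unseen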
Nothing ridiculous about it.
It's totally to the point.
A problem about mathematical series, which you offer, is not an adaptive/AGI
problem. I assume that your machine knows about series. All AI machines will
continually be presented with unseen problems. A simple calculator will
encounter sums
Bo M:
- A way to switch between representations and thinking processes when one
set of methods fails. This would keep expert knowledge in one domain
connected to expert knowledge from other domains.
What, if any, approaches to MULTI-DOMAIN thinking actually exist, or have
been tried?
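The simplest form of that switching I can picture is a plain dispatcher that
tries solvers from different domains in turn and falls through when one fails.
A toy sketch (my own, not a description of any existing system):

# Toy sketch: try domain-specific solvers until one produces an answer.
def arithmetic_solver(problem):
    try:
        return eval(problem, {"__builtins__": {}})    # handles things like "2+2"
    except Exception:
        return None                                   # not an arithmetic problem

def lookup_solver(problem, facts={"capital of france": "Paris"}):
    return facts.get(problem.lower())                 # tiny knowledge-base domain

SOLVERS = [arithmetic_solver, lookup_solver]          # one entry per domain

def solve(problem):
    for solver in SOLVERS:
        answer = solver(problem)
        if answer is not None:                        # switch methods on failure
            return answer
    return "don't know"

print(solve("2+2"))                 # 4
print(solve("capital of France"))   # Paris

Of course this only illustrates the switching half; keeping the domains
*connected* is the genuinely hard part.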
What other list of specific goals, though, would you posit a first-generation
AGI accomplishing other than the ones I mentioned?
James Ratcliff
Mike Tintner [EMAIL PROTECTED] wrote: I think you're thinking in too
limited ways about the physical tasks, simulated or embodied - although that
The Speagram language framework allows programming in a natural-language-like
idiom:
http://www.speagram.org/
IMO this is a fascinating and worthwhile experiment, but I'm not yet
convinced it makes programming any easier...
I believe a couple of the key authors of this language are on this
Mark,
logic, when it relies upon single-chain reasoning, is relatively fragile.
And when it rests upon bad assumptions, it can be just a roadmap to
disaster.
It all improves with learning. In my design (not implemented yet), the AGI
learns from stories and (assuming it has learned enough) can complete
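(Purely as a toy illustration of the completion idea -- my sketch, and
emphatically not the design itself: a learner that has seen enough stories can
extend a new one from statistics of what tended to follow what.)

from collections import defaultdict, Counter

# Toy sketch: learn word-bigram statistics from stories, then extend a prompt.
def train(stories):
    follows = defaultdict(Counter)
    for story in stories:
        words = story.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(prompt, follows, length=5):
    words = prompt.split()
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])   # most frequent continuation
    return " ".join(words)

model = train(["the dog chased the cat", "the cat ran up the tree"])
print(complete("the dog", model))   # "the dog chased the cat ran up"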