> My point is that humans make decisions based on millions of facts, and we
> do this every second.
Not! Humans make decisions based upon a very small number of pieces of
knowledge (possibly compiled from large numbers of *very* redundant data).
Further, these facts are generally arranged somewhat pyramidally. Humans do
not consider *anything* other than this small number of facts.
> Every fact depends on other facts. The chain of reasoning covers the
> entire knowledge base.
True but entirely irrelevant. Living in the world dictates the vast, vast
majority of that knowledge base. If that were not true, we could not
communicate with one another the way we do. For the purposes of human
decision-making, I would argue that humans use *at most* the facts that they
subsequently cite to justify their decisions.
> I said "millions", but we really don't know. This is an important
> number. Historically we have tended to underestimate it.
Assuming now that you mean facts (and not algorithms) that we need in our
knowledge base, I don't have any problem with this number.
> If the number is small, then we *can* follow the reasoning, make changes
> to the knowledge base and predict the outcome (provided the
> representation is transparent and accessible through a formal language).
And even if the number is very large, then we *can* follow the reasoning,
make changes to the knowledge base and predict the outcome (provided the
representation is transparent and accessible).
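Concretely, "follow the reasoning" is just a traversal over an explicit
knowledge base. A minimal Python sketch, with all facts and names invented
purely for illustration (none of this is from the thread): each fact records
the facts that support it, so the full chain behind any conclusion can be
enumerated and inspected.

```python
def trace(kb, fact, seen=None):
    """Return every fact that `fact` ultimately depends on."""
    if seen is None:
        seen = set()
    for support in kb.get(fact, []):
        if support not in seen:
            seen.add(support)
            trace(kb, support, seen)
    return seen

# Illustrative knowledge base: each entry maps a conclusion to the
# facts that directly support it.
kb = {
    "turn left": ["need gas", "gas station ahead"],
    "need gas": ["tank nearly empty"],
    "tank nearly empty": ["needle on E"],
}

print(sorted(trace(kb, "turn left")))
# → ['gas station ahead', 'need gas', 'needle on E', 'tank nearly empty']
```

With a representation this transparent, changing a leaf fact and re-running
the trace shows exactly which conclusions are affected, whether the knowledge
base holds four facts or four million.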
> But this leads us down a false path.
How so? The problem with previous systems is that they were small and were
then expected to generalize correctly to cases that it was unreasonable to
expect them to cover. And, in particular, I believe that no one has yet
approached the number and breadth of algorithms/methods that you need to
have for a general intelligence -- particularly since I hesitate to believe
that there is a system with more than 100 truly different algorithms
(meaning separately coded and not automatically generated from underlying
algorithms and data).
> We are not so smart that we can build a machine smarter than us, and
> still be smarter than it.
Smart is not equivalent to algorithmic complexity, and this is a
nonsensically nasty and incorrect rephrasing of a paradox, designed solely to
win an argument. Try to keep it civil, will you?
> Either the AGI has more algorithmic complexity than you do, or it has
> less.
Wrong. It has exactly the same algorithmic complexity (i.e. it can build out
to any arbitrary value it needs, just as any human can). Now what does that
do to your arguments?
> you will exhaust the memory in your brain before you finish
Huh? Aren't I allowed to use writing? Or computers? I have effectively
infinite memory (when you consider how much I can actually use at one time).
Don't you?
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, November 16, 2006 3:51 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
My point is that humans make decisions based on millions of facts, and we do
this every second. Every fact depends on other facts. The chain of
reasoning covers the entire knowledge base.
I said "millions", but we really don't know. This is an important number.
Historically we have tended to underestimate it. If the number is small,
then we *can* follow the reasoning, make changes to the knowledge base and
predict the outcome (provided the representation is transparent and
accessible through a formal language). But this leads us down a false path.
We are not so smart that we can build a machine smarter than us, and still
be smarter than it. Either the AGI has more algorithmic complexity than you
do, or it has less. If it has less, then you have failed. If it has more,
and you try to explore the chain of reasoning, you will exhaust the memory
in your brain before you finish.
-- Matt Mahoney, [EMAIL PROTECTED]
----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
I consider the last question in each of your examples to be unreasonable
(though for very different reasons).
In the first case, "What do you see?" is a nonsensical and unnecessary
extension on a rational chain of logic. The visual subsystem, which is not
part of the AGI, has reported something and, unless there is a good reason
not to, the AGI should believe it as a valid fact and the root of a
knowledge chain. Extending past this point to ask a spurious, open question
is silly. Doing so is entirely unnecessary. This knowledge chain is
isolated.
In the second case, I don't know why you're doing any sort of search
(particularly since there wasn't any sort of question preceding it). The AI
needed gas, it found a gas station, and it headed for it. You asked why it
waited until a given moment, and it told you. How is this not isolated?
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]> wrote:
Give me a counter-example of knowledge that can't be isolated.
Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on "E".
Q. How do you know?
A. Because I can see it.
Q. What do you see?
(depth first search)
Q. Why did you turn left here?
A. Because I need gas.
Q. Why did you turn left *here*?
A. Because there is a gas station.
Q. Why did you turn left now?
A. Because there is an opening in the traffic.
(breadth first search)
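The two lines of questioning above correspond to depth-first and
breadth-first traversal of a justification tree. A minimal Python sketch,
with the tree of "why" answers invented from the examples above purely for
illustration:

```python
from collections import deque

# Illustrative justification tree: each answer maps to its supports.
why = {
    "turn left": ["need gas", "gas station here", "opening in traffic"],
    "need gas": ["tank almost empty"],
    "tank almost empty": ["needle on E"],
    "needle on E": ["I can see it"],
}

def dfs(fact):
    """Chase one justification all the way down before any sibling."""
    order = [fact]
    for child in why.get(fact, []):
        order.extend(dfs(child))
    return order

def bfs(fact):
    """Ask for all immediate justifications before going deeper."""
    order, queue = [], deque([fact])
    while queue:
        f = queue.popleft()
        order.append(f)
        queue.extend(why.get(f, []))
    return order

print(dfs("turn left"))  # "Why?... Why?... Why?" down one chain
print(bfs("turn left"))  # "Why here? Why *here*? Why now?" across siblings
```

Either traversal terminates quickly on a small tree; the dispute in this
thread is over what happens when the tree is the size of a human knowledge
base.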
It's not that we can't do it in theory. It's that we can't do it in
practice. The human brain is not a Turing machine. It has finite time and
memory limits.
-- Matt Mahoney, [EMAIL PROTECTED]
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303