> Matt was not arguing over whether what an AI does should be called
> "understanding" or "statistics". Matt was discussing what the right
> way to design an AI is.
And Matt made a number of statements that I took issue with -- the current
one being that an AI's reasoning wouldn't be human-understandable. Why
don't we stick with that point?
> It is the human who (at first) designs the AI.
And your point is? I'm arguing that the AI's reasoning should/will be
human-understandable. You're arguing that it will not be. And then you're
arguing that the fact that it is the human who (at first) designs the AI
somehow proves *your* point?
> Designs that require the designer to have super-human abilities
> are poor designs.
Designs with infeasible computational requirements are poor designs.
Designs that can't be debugged are poor designs. I'm not requiring
super-human abilities at all -- *you* are. It is *your* contention that
understanding the AI's reasoning will require superhuman abilities. I don't
see that at all. It's all just data and algorithms.
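To make that concrete, here's a toy sketch (Python; the network and its
weights are invented purely for illustration) of the point that a trained
network's "reasoning" is just arithmetic a human can trace line by line:

```python
def step(z):
    """Threshold activation: fires (1) when its input is positive."""
    return 1 if z > 0 else 0

def forward(x1, x2):
    """A hand-checkable two-unit network computing exclusive-or."""
    a = step(x1 + x2 - 0.5)   # fires when at least one input is on
    b = step(x1 + x2 - 1.5)   # fires only when both inputs are on
    return a - b              # XOR of the two inputs

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", forward(x1, x2))   # -> 0, 1, 1, 0
```

Every intermediate value is inspectable; nothing in the computation is beyond
a human with pencil and paper.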
Your previous "example" -- that vectors aren't understandable because they
comprise millions of data points -- conflates several interpretations of
understanding to confuse the issue and doesn't prove your point at all.
Mathematically, support-vector methods, neural networks, and matrix algebra
are fundamentally isomorphic: in all three cases, you are deriving (via
various methods) the best n equations to describe a given training dataset.
Given a particular *ordered* data set and the training algorithm, a human can
certainly calculate the final vectors/weights/equations. A human who knows
the current vectors/weights/equations can certainly calculate the output when
the system is presented with a given new point. What a human can't do is
describe why, in the real world, that particular vector may be optimal -- and
the reason the human can't is that *IT IS NOT OPTIMAL* for the real world
except in toy cases! All three methods are *very* subject to overfitting and
numerous other maladies unless a) the number of vectors/nodes/equations is
exactly right for the problem (and we currently don't know any good
algorithms to ensure this) and b) the number of training examples is *much*
larger than the number of variables involved in the solution and the
vectors/network are/is *very* thoroughly trained. Going for the minimal
correct number of vectors/nodes/equations is computationally infeasible for
large, complicated problems with many variables; allowing too many leaves you
with only nearest-match capability and *zero* predictive power.
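A quick sketch of the overfitting half of this (Python; the toy data are
invented for the purpose): give a curve-fitter as many free parameters as
data points and it reproduces the training set perfectly while saying nothing
reliable beyond it.

```python
def lagrange_fit(xs, ys):
    """Return the unique degree n-1 polynomial through n points --
    i.e. a fit with exactly as many free parameters as data points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# The "real world" relation is simply y = x, observed with a little noise:
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0.0, 1.1, 1.9, 3.0, 4.2, 4.8, 6.0]

p = lagrange_fit(xs, ys)
print(p(3))     # a training point: matched exactly (3.0)
print(p(3.5))   # between training points: already drifting
print(p(10))    # outside the data: wildly off the y = x trend
```

Zero training error, yet the fit is worthless away from the points it
memorized -- which is the malady being described above.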
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. Given enough variables, they can brute-force
solutions through what is effectively case-based/nearest-neighbor reasoning,
but that is *not* intelligence. You and they can't build upon that.
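The nearest-neighbor point in miniature (Python; data invented): train on
y = 2x over x in [0, 10], and a 1-NN "model" answers any query outside that
range with the stored value of the closest case -- it never extends the trend.

```python
# The entire "training set": y = 2x sampled at x = 0..10.
train = [(x, 2 * x) for x in range(11)]

def nearest_neighbor(query):
    """Answer with the y of the stored case whose x is closest to the query."""
    x, y = min(train, key=lambda pair: abs(pair[0] - query))
    return y

print(nearest_neighbor(7))     # 14 -- fine inside the training range
print(nearest_neighbor(100))   # 20 -- but the true trend says 200
```

Inside the training range it looks competent; one step outside, it just
replays the nearest memorized case.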
> Thus, the machine-learning black-box approach is a better design.
Why? Although this is a nice use of buzzwords, I strongly disagree for
numerous reasons and, despite your "thus", your previous arguments certainly
don't lead to this conclusion. Obviously, any design that I consider is
using machine learning -- but machine learning does not imply black-box.
And since all "black-box" means is that you can't see inside it, it seems
like nothing but an invitation to disaster to me. So why is it a better
design? All that I see here is something akin to "I don't understand it, so
it must be good".
----- Original Message -----
From: "Philip Goetz" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, November 29, 2006 1:53 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> > A human doesn't have enough time to look through millions of pieces of
> > data, and doesn't have enough memory to retain them all in memory, and
> > certainly doesn't have the time or the memory to examine all of the
> > 10^(insert large number here) different relationships between these
> > pieces of data.
> True, however, I would argue that the same is true of an AI. If you assume
> that an AI can do this, then *you* are not being pragmatic.
> Understanding is compiling data into knowledge. If you're just brute
> forcing millions of pieces of data, then you don't understand the problem --
> though you may be able to solve it -- and validating your answers and
> placing intelligent/rational boundaries/caveats on them is not possible.
Matt was not arguing over whether what an AI does should be called
"understanding" or "statistics". Matt was discussing what the right
way to design an AI is. It is the human who (at first) designs the
AI. Designs that require the designer to have super-human abilities
are poor designs. Thus, the machine-learning black-box approach is a
better design.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303