Charles,
I don't think I've misunderstood what Turing was proposing. At least not any
more than the thousands of other people who have written about Turing and his
test over the decades:
http://en.wikipedia.org/wiki/Turing_test
http://www.zompist.com/turing.html (Twelve reasons to toss the
Eric:
Yes. An electronic mind need never forget important facts. It'd enjoy
instant recall and on-demand instantaneous binary-precision arithmetic
and all the other upshots of the substrate. On the other hand it
couldn't take, say, morphine!
It would though, presumably, have major problems
Ben: but, from a practical perspective, it seems more useful to think about
minds that are roughly similar to human minds, yet better adapted to existing
computer hardware, and lacking humans' most severe ethical and motivational
flaws
Well a) I think that we now agree that you are engaged in a
On Sun, Aug 10, 2008 at 9:02 AM, Mike Tintner [EMAIL PROTECTED]wrote:
Ben: but, from a practical perspective, it seems more useful to think
about minds that are roughly similar to human minds, yet better adapted to
existing computer hardware, and lacking humans' most severe ethical and
On 8/10/08, Matt Mahoney [EMAIL PROTECTED] wrote:
rick the ponderer [EMAIL PROTECTED] wrote:
Regarding competing to buy information - I'm not suggesting that at
all, people would be competing to sell the services of their classifier
(and shopping around for the best classifier to consume or
On 8/10/08, rick the ponderer [EMAIL PROTECTED] wrote:
On 8/10/08, Matt Mahoney [EMAIL PROTECTED] wrote:
rick the ponderer [EMAIL PROTECTED] wrote:
Regarding competing to buy information - I'm not suggesting that at
all, people would be competing to sell the services of their classifier
Ben,
Obviously an argument too massive to be worth pursuing in detail. But just one
point - your arguments are essentially specialist, focussing on isolated
anatomical rather than cognitive features (and presumably we (science) don't
yet have the general, systemic overview necessary to
Agree that the human mind/brain has evolved to work reasonably effectively
in a holistic way, in spite of the obvious limitations of various of its
components...
To give a more cognitive example of a needless limitation of the human mind:
why can't we just remember a few hundred numbers in
2008/8/10 Mike Tintner [EMAIL PROTECTED]:
Just as you are, in a rational, specialist way, picking off isolated features,
so rational, totalitarian thinkers used to object to the crazy,
contradictory complications of the democratic, conflict system of
decisionmaking by contrast with
Interesting conversation. I wanted to suggest something about how an AGI
might be qualitatively different from a human. One possible difference
could be an overriding thoroughness. People generally don't put in the
effort to consider all the possibilities in the decisions they make, but
computers
And I've said it before, but it bears repeating in this context. Real
intelligence requires that mistakes be made. And that's at odds with
regular programming, because you are trying to write programs that don't
make mistakes, so I have to wonder how serious people really would be
about
me:
And I've said it before, but it bears repeating in this context. Real
intelligence requires that mistakes be made. And that's at odds with
regular programming, because you are trying to write programs that don't
make mistakes, so I have to wonder how serious people really would be
about
Yes. This is one of the reasons why I like virtual world and game AI as
a commercial vehicle for the popularization and monetization of early-stage
AGIs.
No one cares that much if a game AI occasionally does something dumb.
It may even be considered charmingly funny. Much more so than if the
On Sun, Aug 10, 2008 at 5:52 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Will,
Maybe I should have explained the distinction more fully. A totalitarian
system is one with an integrated system of decisionmaking, and unified
goals. A democratic, conflict system is one that takes decisions with
Ben: By true rationality I simply mean making judgments in accordance with
probability theory based on one's goals and the knowledge at one's disposal.
Which is not applicable to AGI problems, which are wicked and ill-structured,
and where you cannot calculate probabilities, and are not sure of
Or even simpler problems, like: how were you to handle the angry Richard
recently? Your response, and I quote: Aaargh! (as in how on earth do I
calculate my probabilities and Bayes? and which school of psychological
thought is relevant here?) Now you're talking AGI. There is no rational or
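For concreteness, here is a minimal sketch of the kind of "true rationality" Ben defines above, i.e. choosing the action with the highest expected utility under a probability model. The scenario, probabilities, and utilities are invented purely for illustration and are not taken from OCP or any actual system.

# Toy expected-utility chooser: "judgments in accordance with probability
# theory based on one's goals and the knowledge at one's disposal".
# All names and numbers below are invented for illustration.

beliefs = {"rain": 0.3, "dry": 0.7}            # knowledge: P(world state)

utilities = {                                   # goals: utility of (action, state)
    ("take_umbrella", "rain"):  1.0,
    ("take_umbrella", "dry"):  -0.1,            # mild nuisance of carrying it
    ("go_without",    "rain"): -1.0,
    ("go_without",    "dry"):   0.2,
}

def expected_utility(action):
    # Weight the utility of each outcome by the probability of that world state.
    return sum(p * utilities[(action, state)] for state, p in beliefs.items())

actions = ["take_umbrella", "go_without"]
best = max(actions, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in actions})
# -> take_umbrella {'take_umbrella': 0.23, 'go_without': -0.16}

Tintner's objection, in effect, is that for wicked, ill-structured problems there is nothing to fill the beliefs and utilities tables with in the first place.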
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Sunday, August 10, 2008 8:00 AM
Subject: Re: [agi] The Necessity of Embodiment
... the best approaches are
1) wait till the brain scientists scan the brain well enough that, by
combining appropriate
I am aware of textbook neuroscience, but it really does not tell you enough
to let you emulate the brain.
Neuroanatomy plus single-neuron understanding is not enough...
OCP cannot benefit directly from detailed neuroscience knowledge as it is a
different sort of AGI system ... but a closely
I definitely agree with you, given that no one is sure where even memory is
stored in the brain, much less thought, consciousness, etc.
Gross anatomy does prove useful when examining established input/output systems
and modalities along the lines of a black-box
model of the brain, wherein providing an