Hi,
It may be of interest to those who reject Searle's argument that a
solution has actually been proposed that supposedly makes both sides
happy: it should make Searle happy because it is an AI approach that
does not rely solely on computation in a box, and it should make AI
engineers happy because it is an alternative, entirely feasible
approach to building AI -- something Searle never offered.
In other words, Searle only said "computation is not good enough"; he
never suggested what would be good enough. Instead, he just said "we
don't know enough about the brain and consciousness".
The proposed solution still uses computers, but additional "hardware"
is needed for the whole thing to work. Let me clarify the nature of
that solution in terms of Searle's heart-pump example. Searle tells us
that it is not enough to simulate a heart pump on a computer: a
simulation cannot actually pump blood. We have to build a machine that
pumps blood. Only then do we have a replication, and not just a
simulation.
Note that it would be fine to have a computer as a part of that machine
(for example, for controlling the intensity of pumping, monitoring the
battery, etc.). But the actual pumping occurs outside the computer. As
a result, the whole thing (computer + pump) is no longer a simulation
but a replication.
Similarly, to produce strong AI, the proposal goes, we have to couple
computers to the outside world using sensors and effectors. If this
coupling is done in the proper way, the resulting "thought" no longer
occurs in the computer alone; it occurs in the combination of the
computer and the rest of the machinery. As a result, we have a
replication, not a simulation.
The coupling has to be made such that a thought process (i.e.,
perception, decision, etc.) never relies solely on the computer;
instead, the thought occurs through stages that involve iterative
interactions with the environment (the computer computes something,
then the environment gives feedback, then the computer computes again,
and so on). This means that simply hooking up a deep learning network
to a camera and a robotic arm would not do the job: the network would
still compute all of the stages of processing "in the box".
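To make the shape of that loop concrete, here is a toy sketch (all names
and numbers are invented for illustration; this is not the actual
proposal, just the bare structure of computer-environment iteration):

```python
# Toy closed-loop agent: each "thought" is split across stages.
# The agent cannot finish its estimate internally in one pass;
# it must act, receive feedback from outside, and iterate.

def run_loop(env_state, steps=20):
    estimate = 0.0  # the agent's internal guess about env_state
    for _ in range(steps):
        # 1. Computer computes: propose an action from the current estimate.
        action = estimate
        # 2. Environment responds: this feedback exists only outside the box.
        feedback = env_state - action
        # 3. Computer computes again, using that external feedback.
        estimate += 0.5 * feedback
    return estimate

print(run_loop(3.0))  # converges toward the external value 3.0
```

The point of the sketch is only that no single internal pass produces
the result; it emerges from the repeated exchange with the environment.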
It is also argued that this approach creates a much smarter AI than an
AI based solely "in the computational box".
I provide here an introduction to this novel approach to strong-AI:
http://ieet.org/index.php/IEET/more/nikolic20160108
I would very much appreciate feedback, opinions and critique (it is all
welcome).
Danko
On 21/01/16 03:26, Jim Bromer wrote:
I was disappointed by Searle's refusal to budge from the foundation of
his historical view. I want to go back and listen to his responses to
Kurzweil's comments because I stopped listening shortly after he
started to respond.
I disagree with the viewpoint that the mind is a semantic processor
in the sense Searle used when arguing against computational theories
of mind. As a Christian I have no problem with the idea that there is
a lot we do not understand about conscious experience, and I think
that explanations of how consciousness works may require theories
about matter (and energy) that are unimaginable to us now. However,
the idea that the basis of natural intelligence is not a syntactic
computational device (of some sort) is too far out there for me. I
also think his distinction between epistemological knowledge and
ontological knowledge is pushed too far. It is very useful to make
that kind of distinction, and it is important to be able to think
about it. But the idea that epistemological knowledge cannot be used
to encode ontological knowledge is so far removed from common sense
that I have to reject any absolute version of the distinction.
Jim Bromer
On Wed, Jan 20, 2016 at 5:14 PM, EdFromNH . <[email protected]> wrote:
Jim Bromer,
I only listened to about 15 minutes at the start of the video (it seemed
similar to previous speeches I have heard from him), and skipped ahead to
hear Kurzweil questioning him about 40 minutes in, and then quit after
hearing his response to Ray.
One of Searle's main mistakes is his claim that digital computers are
syntactical, that humans think largely semantically (which I agree with),
and that syntactical computation can never compute semantics (which I
disagree with). One of the major philosophical advancements in
understanding cognitive computing is that through grounding in massive,
experientially connected data, syntax can, in fact, compute semantics.
The advances being made in deep learning strongly support this.
For example, deep learning indicates that the visual meaning of a
concept such as "cat", with all of its rich possible visual variations,
can be understood by what Searle calls a syntactical system. If deep
learning systems for vision were connected with deep learning systems
for hearing, touch, emotions, goals, behaviors, etc., the combined
system would have an even richer understanding of the meaning of a word
such as "cat".
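The multimodal idea can be sketched in miniature (the feature values
and the nearest-neighbour rule are invented for illustration; a real
system would learn these representations with deep networks):

```python
# Toy sketch of multimodal grounding: a concept like "cat" is
# represented by joining feature vectors from several modalities.

import math

def fuse(vision, audio):
    """Concatenate per-modality features into one grounded vector."""
    return vision + audio

CONCEPTS = {
    "cat": fuse([0.9, 0.1], [0.8, 0.2]),  # furry-looking, meowing
    "dog": fuse([0.8, 0.3], [0.1, 0.9]),  # furry-looking, barking
}

def classify(observation):
    """Pick the nearest concept by Euclidean distance in the fused space."""
    return min(CONCEPTS, key=lambda c: math.dist(CONCEPTS[c], observation))

# An observation that is visually ambiguous between the two concepts
# is disambiguated by the audio channel.
print(classify(fuse([0.85, 0.2], [0.7, 0.3])))  # "cat"
```

The design point is simply that each extra modality adds dimensions to
the fused space, so concepts that look alike in one channel can still
be separated by the others.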
So Searle's thinking is deeply flawed. But Searle's notion that
consciousness requires computation having qualities shared by biological
brains that are not shared by current computers, even current deep learning
systems, is not clearly wrong.
Ed Porter
P.S. Jim, if you get this message, please give me a brief ping to say
you have. For the last several months all of my posts to [email protected]
have been returned with an error message. I have emailed Ben directly
to ask what the problem is and he has not responded. I am trying to
determine if I have been ejected from this list because I dared to ask
to publicly debate with Ben about his dismissal of my Computational
Awareness Theory of Consciousness, or if there is some technical error.
On Mon, Jan 18, 2016 at 5:22 PM, Jim Bromer <[email protected]> wrote:
John Searle: "Consciousness in Artificial Intelligence" | Talks at Google.
I just started listening to it, but it is interesting. He starts with
epistemic objectivity and ontological subjectivity (but promises to
avoid using too many polysyllabic words.)
https://www.youtube.com/watch?v=rHKwIYsPXLg
Jim Bromer
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/8630185-a57a74e1
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com