Dear Jim,
May I add a quantitative argument to our disagreement?
Here is a calculation making the point that, if one does not rely on
the environment for intelligence, the required size of the brain would
have to grow by more than 10 orders of magnitude in comparison to what
we have today:
http://arxiv.org/abs/1505.00775
The same applies to AI. If you do not rely on the environment but want
to make all of the computations within the "box"--as has mostly been
done to date--you may need a computer the size of Manhattan in the most
optimistic scenario, and more likely one the size of the Moon.
In other words, the paper proposes a quantitative argument for taking
practopoiesis seriously. It asks two questions: (1) How much
computational power does the brain have? And (2) how much computational
power does human-level intelligence need? There is a huge discrepancy
if we take today's dominant brain/AI theory, and there is no
discrepancy if we take the T3-approach of practopoiesis.
To reject the theory, one then needs to debunk the assumptions on which
the calculations were made. Alternatively, one may conclude that the
calculations were reasonable and thus that today's approach to AI
cannot produce AGI -- a conclusion reached solely on the basis of the
total amount of computational/memory resources that this approach would
require.
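To make the scale concrete, here is a rough back-of-envelope sketch in
Python. Every number in it is an illustrative assumption of my own, not
a figure taken from the paper:

    # Back-of-envelope comparison; all figures below are illustrative
    # assumptions, not numbers from the paper.
    brain_ops = 1e15     # assumed computational power of the brain (ops/s)
    needed_ops = 1e25    # assumed need if everything is computed "in the box"

    print(f"discrepancy: {needed_ops / brain_ops:.0e}x")  # ~10 orders of magnitude

    # Translate the requirement into hardware under assumed chip specs:
    chip_ops = 1e12      # assumed ops/s per chip
    chip_area = 1e-4     # assumed area per chip, in m^2 (1 cm^2)
    print(f"area: {needed_ops / chip_ops * chip_area:.0e} m^2")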
Best,
Danko
On 24/01/16 16:51, Jim Bromer wrote:
Danko,
Although the way you talk about your argument may be a bit unique, the
theory that it might be necessary to combine sensors and robotics to
achieve AGI is not. I happen to think the theory is wrong. The data
that is coming into the computer and going out of the computer is
electronic. The problem is complexity. As an AI program attains more
knowledge (like narrow knowledge), the number of potential ways that
knowledge can be combined is going to grow combinatorially. Otherwise
the program has to find some way to combine the elements of narrow
knowledge in a way which fudges them together efficiently, but that
will make it lossy. (That process might work one day, but I still would
not think it is the best way to go about it.)
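Just to illustrate the growth rate I mean, here is a toy count in
Python (the element counts are hypothetical):

    # Toy illustration: with n independent knowledge elements, the
    # number of non-empty ways to combine them grows as 2^n - 1.
    for n in (10, 20, 40, 80):
        print(f"{n} elements -> {2**n - 1:.1e} combinations")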
An AI program has to interact with the data environment in some way.
That stands as an essential part of the definition of AI.
But a computer only interacts with a data environment. Of course, a
human brain is also only interacting with 'its' sensory 'data'
environment. (The human brain is also interacting with the blood and so
on, but those other biological interactions do not seem as if they
could explain 'understanding' of the greater world.)
Jim Bromer
On Thu, Jan 21, 2016 at 2:52 AM, Danko Nikolic
<[email protected]> wrote:
Hi,
It may be of interest to those who reject Searle that there is actually
a solution being proposed that supposedly makes both sides happy: it
should make Searle happy because the proposed form of AI does not rely
solely on computation in a box; and it should make AI engineers happy
because it is an alternative, completely feasible approach to building
AI -- something that Searle never offered.
In other words, Searle just said "computation is not good enough", but did
not suggest what would be good enough. Instead, he just said "we don't know
enough about the brain and consciousness".
The proposed solution still uses computers, but additional "hardware"
is also needed for the whole thing to work. Let me clarify the nature
of that solution in terms of Searle's heart pump example.
Searle tells us that it is not enough to simulate a heart pump on a
computer. A simulation cannot actually pump blood. We have to build a
machine that pumps blood. Only then do we have a replication, and not
just a simulation.
Note that it would be ok to have a computer as a part of that machine (for
example for controlling the intensity of pumping, watching the battery,
etc.). But the actual pumping occurs outside the computer. As a result,
the whole thing (computer + pump) is no longer a simulation but a
replication.
Similarly, in terms of intelligence capable of producing strong AI, as
the proposal goes, we have to couple computers to the outside world
using sensors and effectors. If this coupling is made in a proper way,
the resulting "thought" no longer occurs in the computer, but occurs as
a combination of the computer and the rest of the machinery. As a
result, we have a replication, not a simulation.
The coupling has to be made such that a thought process (i.e.,
perception, decision, etc.) never relies solely on a computer, but
unfolds through stages that involve iterative interactions with the
environment (the computer computes something, then the environment
gives feedback, then the computer does something again, etc.). This
means that simply hooking up a deep learning network to a camera and a
robotic arm would not do the job. That setup would still not satisfy
the requirements, because the deep learning network would still compute
all of the stages of processing "in the box".
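Here is a minimal sketch in Python of the kind of loop I mean. The
Environment class and its step() method are placeholders of my own,
not an interface taken from the article:

    # Minimal sketch of an environment-in-the-loop "thought" process.
    # No single pass through the computer contains the whole computation;
    # each cycle alternates between computing and sensing feedback.

    class Environment:
        """Stub world that reacts to an action and returns feedback."""
        def __init__(self):
            self.state = 0.0

        def step(self, action):
            self.state += action   # the world changes in response
            return self.state      # feedback sensed on the next cycle

    def think(env, n_cycles=5):
        percept = env.step(0.0)         # initial sensing
        for _ in range(n_cycles):
            action = 0.5 - percept      # compute something from the percept
            percept = env.step(action)  # environment gives feedback
        return percept

    print(think(Environment()))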
It is also argued that this approach creates a much smarter AI than an AI
based solely "in the computational box".
I provide here an introduction to this novel approach to strong-AI:
http://ieet.org/index.php/IEET/more/nikolic20160108
I would very much appreciate feedback, opinions and critique (it is all
welcome).
Danko
On 21/01/16 03:26, Jim Bromer wrote:
I was disappointed by Searle's refusal to budge from the foundation of
his historical view. I want to go back and listen to his responses to
Kurzweil's comments because I stopped listening shortly after he
started to respond.
I disagree with the viewpoint that the mind is a semantic processor,
in the way Searle was using it to argue against computational theories
of mind.
As a Christian I have no problem with the idea that there is a lot
that we do not understand about conscious experience and I think that
the explanations of how consciousness works may require theories about
matter (and energy) that are unimaginable to us now. However, the idea
that the basis of natural intelligence is not a syntactic
computational device (of some sort) is too far out there for me. I
also noticed that he takes the distinction between epistemological
knowledge and ontological knowledge to excess. I think it is very
useful to make that kind of distinction, and I mean that it is very
important to be able to think about it. But on the other hand, the idea
that epistemological knowledge cannot be used to encode ontological
knowledge is so far removed from common sense that I have to reject the
idea that there is some kind of absolute distinction.
Jim Bromer
On Wed, Jan 20, 2016 at 5:14 PM, EdFromNH . <[email protected]> wrote:
Jim Bromer,
I only listened to about 15 minutes at the start of the video (it
seemed similar to previous speeches I have heard from him), skipped
ahead to hear Kurzweil questioning him about 40 minutes in, and then
quit after hearing his response to Ray.
One of Searle's main mistakes is his claim that digital computers are
syntactical, that humans think largely semantically (which I agree
with), and that syntactical computation can never compute semantics
(which I disagree with). One of the major philosophical advancements in
understanding cognitive computing is that, through grounding in massive
amounts of experientially connected data, syntax can, in fact, compute
semantics. The advances being made in deep learning strongly support
this.
For example, deep learning indicates that the visual meaning of a
concept such as "cat", with all of its rich possible visual variations,
can be understood by what Searle calls a syntactical system. If deep
learning systems for vision were connected with deep learning systems
for hearing, touch, emotions, goals, behaviors, etc., the combined
system would have an even richer understanding of the meaning of a word
such as "cat".
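As a toy sketch of how purely "syntactic" operations can track meaning
once representations are grounded, consider cosine similarity over
feature vectors. The vectors below are made-up stand-ins for learned
multi-modal features, not outputs of any real model:

    # Toy sketch: formal vector arithmetic over (made-up) grounded
    # features recovers semantic relatedness between concepts.
    import math

    features = {                    # hypothetical multi-modal features
        "cat":   [0.9, 0.8, 0.1],   # e.g., furry, purrs, small
        "tiger": [0.8, 0.6, 0.9],
        "car":   [0.0, 0.2, 0.7],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    print(cosine(features["cat"], features["tiger"]))  # high: close in meaning
    print(cosine(features["cat"], features["car"]))    # lower: distant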
So Searle's thinking is deeply flawed. But Searle's notion that
consciousness requires computation having qualities shared by
biological brains that are not shared by current computers, even
current deep learning systems, is not clearly wrong.
Ed Porter
P.S. Jim, if you get this message, please give me a brief ping to say
you have. For the last several months all of my posts to
[email protected] have been returned with an error message. I have
emailed Ben directly to ask what the problem is and he has not
responded. I am trying to determine if I have been ejected from this
list because I dared to ask to publicly debate with Ben about his
dismissal of my Computational Awareness Theory of Consciousness, or if
there is some technical error.
On Mon, Jan 18, 2016 at 5:22 PM, Jim Bromer <[email protected]> wrote:
John Searle: "Consciousness in Artificial Intelligence" | Talks at
Google.
I just started listening to it, but it is interesting. He starts with
epistemic objectivity and ontological subjectivity (but promises to
avoid using too many polysyllabic words).
https://www.youtube.com/watch?v=rHKwIYsPXLg
Jim Bromer