Hi Bruno Marchal
I'm still trying to figure out how numbers and ideas fit
into Leibniz's metaphysics. Little is written about this issue,
so I have to rely on what Leibniz says otherwise about monads.
Previously I noted that numbers could not be monads because
monads constantly change. Another argument against numbers
being monads is that all monads must be attached to corporeal
bodies. So monads refer to objects in the (already) created world,
whose identities persist, while ideas and numbers are not
created objects.
While numbers and ideas cannot be monads, they have to
be entities in the mind, feelings, and bodily aspects
of monads. For Leibniz refers to the intellect of human
monads. And similarly, numbers and ideas must be used
in the fictional construction of matter: in the bodily
aspect of material monads, as well as in the construction
of our bodies and brains.
Roger Clough, rclo...@verizon.net
9/30/2012
Forever is a long time, especially near the end. -Woody Allen
- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-29, 10:29:23
Subject: Re: questions on machines, belief, awareness, and knowledge
On 29 Sep 2012, at 14:43, Evgenii Rudnyi wrote:
On 24.09.2012 18:23 meekerdb said the following:
On 9/24/2012 2:07 AM, Bruno Marchal wrote:
On 23 Sep 2012, at 18:33, Evgenii Rudnyi wrote:
On 23.09.2012 16:51 Bruno Marchal said the following:
On 23 Sep 2012, at 09:31, Evgenii Rudnyi wrote:
On 22.09.2012 22:49 meekerdb said the following:
...
In the past, Bruno has said that a machine that
understands transfinite induction will be conscious. But
being conscious and intelligent are not the same thing.
Brent
In my view this is the same as epiphenomenalism. Engineers
develop a robot to achieve a prescribed function. They do not
care about consciousness in this respect. Then consciousness
will appear automatically, but the function developed by
engineers does not depend on it. Hence epiphenomenalism seems
to apply.
Not at all. Study the UDA to see exactly why, but if comp is
correct, consciousness is somehow what defines the physical
realities, making it possible for engineers to build the machines,
and then consciousness, despite not being programmable per se,
does have a role, such as relatively speeding up the computations.
As with non-free will, the epiphenomenalism is only
apparent, because you take the outer god's-eye view; but
with comp, there is no matter, nor consciousness, at that
level, and we have no access at all to that level (without
assuming comp, and accessing it intellectually, that is, only
arithmetic).
This is hard to explain if you fail to see the
physics/machine's psychology/theology reversal. You are still
(consciously or not) maintaining the physical supervenience
thesis, or an Aristotelian ontology, but comp makes this
impossible.
Bruno,
I have considered a concrete case, when engineers develop a
robot, not a general one. For such a concrete case, I do not
understand your answer.
I have understood Brent in such a way that when engineers develop
a robot, they need only care about the functionality to be achieved
and can ignore consciousness altogether. Whether it appears in the
robot or not is not the business of engineers. Do you agree
with such a statement or not?
In my defense, I only said that the engineers could develop
artificial intelligences without considering consciousness. I didn't
say they *must* do so, and in fact I think they are ethically bound
to consider it. John McCarthy has already written on this years ago.
And it has nothing to do with whether supervenience or comp is true.
In either case an intelligent robot is likely to be a conscious being
and ethical considerations arise.
Dear Bruno and Brent,
Frankly speaking, I do not quite understand your answers. When I try
to convert your thoughts into guidelines for engineers developing
robots, I get only something like the following.
1) When you make your design, do not care about consciousness; just
implement the required functions.
2) When a robot is ready, it may have consciousness. We have no
clue how to check whether it has it, but you must consider the ethical
implications (say, shutting a robot down may be equivalent to
murder).
Evgenii
P.S. In my view, 1) and 2) imply epiphenomenalism for consciousness.
If consciousness is epiphenomenal, how could matter be explained
through a theory of consciousness/first person, as this is made
obligatory when we assume that we are machines?
I remind you that things go in this way, if we are machine:
number === consciousness === matter
(and only then: matter === human consciousness === human notion of
number. That might explain the confusion.)
I assume some basic understanding of the FPI and the UDA here. (FPI =
first person