On 25 May 2015, at 02:06, Jason Resch wrote:
On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal <[email protected]>
wrote:
On 23 May 2015, at 17:07, Jason Resch wrote:
On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <[email protected]>
wrote:
On 19 May 2015, at 15:53, Jason Resch wrote:
On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou <[email protected]> wrote:

On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>
> On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou <[email protected]> wrote:
>>
>> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>
>> > I think you're not taking into account the level of the functional
>> > substitution. Of course functionally equivalent silicon and functionally
>> > equivalent neurons can (under functionalism) both instantiate the same
>> > consciousness. But a calculator computing 2+3 cannot substitute for a
>> > human brain computing 2+3 and produce the same consciousness.
>>
>> In a gradual replacement the substitution must obviously be at a level
>> sufficient to maintain the function of the whole brain. Sticking a
>> calculator in it won't work.
>>
>> > Do you think a "Blockhead" that was functionally equivalent to you (it
>> > could fool all your friends and family in a Turing test scenario into
>> > thinking it was in fact you) would be conscious in the same way as you?
>>
>> Not necessarily, just as an actor may not be conscious in the same way
>> as me. But I suspect the Blockhead would be conscious; the intuition
>> that a lookup table can't be conscious is like the intuition that an
>> electric circuit can't be conscious.
>>
>
> I don't see an equivalency between those intuitions. A lookup table has a
> bounded and very low degree of computational complexity: all answers to
> all queries are answered in constant time.
>
> While the table itself may have an arbitrarily high information content,
> what in the software of the lookup table program is there to
> appreciate/understand/know that information?
Understanding emerges from the fact that the lookup table is
immensely
large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.
The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes maximum advantage of the space-time trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff

The tin-can Turing machine is unbounded in its potential computational complexity; there's no reason to be a bio- or silico-chauvinist against it. However, by definition, a lookup table has near zero computational complexity and no retained state.
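The space-time trade-off Jason invokes can be made concrete. A minimal Python sketch (the function and the 16-bit domain are arbitrary illustrations, not anything from the thread): every answer is precomputed once, after which each query is a single constant-time memory read with no computation and no retained state.

```python
def slow_popcount(n: int) -> int:
    """Count set bits by repeated shifting: O(log n) work per query."""
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

# Trade space for time: do all the work up front, once, for every
# possible 16-bit input...
TABLE = [slow_popcount(n) for n in range(1 << 16)]

def fast_popcount(n: int) -> int:
    """...so that every later query is a single memory access."""
    return TABLE[n]
```

Over its domain the table is behaviorally identical to the computing version, which is exactly what the Blockhead-style argument trades on.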
But it is counterfactually correct over a large range. Of course, it would have to be infinite to be genuinely counterfactually correct.

But the structure of the counterfactuals is identical regardless of the inputs and outputs in its lookup table. If you replaced all of its outputs with random strings, would that change its consciousness? What if there existed a special decoding book, a one-time pad that could decode its random answers? Would the existence of this book make it more conscious than if this book did not exist? If there is zero information content in the outputs returned by the lookup table, it might as well return all "X" characters as its response to any query; but then would any program that just returns a string of "X"s be conscious?
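Jason's decoding-book scenario is essentially a one-time pad. A hedged sketch (the pad value and message are arbitrary choices for reproducibility): without the pad, the table's output is indistinguishable from noise; with it, the original answer is recovered exactly.

```python
def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR each byte with the pad; XOR is its own inverse, so the same
    function both encodes and decodes."""
    assert len(pad) >= len(data)
    return bytes(b ^ p for b, p in zip(data, pad))

pad = bytes([0x5A] * 16)           # the "special decoding book"
answer = b"2+3 equals 5"
garbled = otp_xor(answer, pad)     # looks like a meaningless string
recovered = otp_xor(garbled, pad)  # the pad restores the answer
```

The garbled output carries no intrinsic meaning; all the information is in the relation between output and pad, which is the crux of Jason's question.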
A lookup table might have some primitive consciousness, but I think any consciousness it has would be more or less the same regardless of the number of entries within that lookup table. With more entries, its information content grows, but its capacity to process, interpret, or understand that information remains constant.
You can emulate the brain of Einstein with a (ridiculously large) look-up table, assuming you are ridiculously patient---or we slow down your own brain so that you are as slow as "Einstein".
Is that incarnation a zombie?
Again, with comp, all incarnations are "zombies", because bodies do not think. It is the abstract person which thinks, and in this case "Einstein" will still be defined by the "simplest normal" computations, which here, and only here, have taken the form of that implausible giant "Einstein look-up table" emulation at the right level.
That last bit is the part I have difficulty with. How can a single call to a lookup table ever be at the right level?
Actually, the Turing machine formalism is a type of look-up table: if you are scanning input i (a big number describing all your current sensory entries) while you are in state q_169757243685173427379910054234647572376400064994542424646334345787910190034....676754100687 (a big number describing one of your many possible mental states), then change the state into q_888..99 and look up what comes next.
By construction that system behaves self-referentially correctly, so it points to the relevant person/history. The history of that program makes no sense, so it is only in the case of its rare incarnation in thought experiments that I endow it with as much consciousness as the brain (which means none to the brain-body, and all to the immaterial person related to some programs executed in the relative plausible brain).
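Bruno's point, that a Turing machine's transition function is itself a finite look-up table from (state, scanned symbol) to (new state, written symbol, head move), can be sketched in miniature. The toy machine below is my own arbitrary example (it inverts a binary string); the crucial difference from Block's table is that the machine re-enters its table with self-modified state, which is the counterfactual structure under discussion.

```python
# The entire "program" is this look-up table: (state, symbol) ->
# (next state, symbol to write, head move). '_' is the blank symbol.
DELTA = {
    ('q0', '0'): ('q0', '1', 1),
    ('q0', '1'): ('q0', '0', 1),
    ('q0', '_'): ('halt', '_', 0),
}

def run(tape: str) -> str:
    """Repeatedly consult the table, feeding its own output back in."""
    cells, state, head = list(tape) + ['_'], 'q0', 0
    while state != 'halt':
        state, cells[head], move = DELTA[(state, cells[head])]
        head += move
    return ''.join(cells).rstrip('_')
```

A Blockhead-style table, by contrast, is consulted exactly once per query and never feeds its own state back into itself.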
It seems that with only one lookup, the table must always operate (by definition) at the highest level, which is probably not low enough to be at the right level.
It is hard to imagine such a look-up table could exist; even to implement a mosquito, it would escape the multiverse.
Little self-changing programs reflecting the neighborhood are more practical.
Consciousness is not in the lower-level activity of the runner, but in a higher-level autonomous circular loop, with complex layers of universal machines between the high and low levels, in the brain.
The brain filters and singularizes consciousness relative to a set of stories.
On the other hand, if we are talking about using lookup tables to implement AND, OR, NAND, NOT, etc., then I can see a CPU based on lookup tables for these low-level operations being used to implement a conscious program. The type of lookup table I have a problem ascribing intelligence to is the type Ned Block described as being capable of passing a Turing test, which received text inputs and returned text answers using a single invocation of the lookup table.
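Jason's low-level case can be illustrated directly: a look-up table for a single NAND gate, with the actual computation done by composition. This is roughly how FPGA LUTs work; the half-adder wiring below is a standard construction, not anything specific from the thread.

```python
# The only look-up table is the primitive gate...
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def half_adder(a: int, b: int) -> tuple:
    """...and the computation lives in how the table is composed:
    sum = a XOR b and carry = a AND b, built from NAND alone."""
    n1 = NAND[(a, b)]
    s = NAND[(NAND[(a, n1)], NAND[(n1, b)])]
    c = NAND[(n1, n1)]
    return s, c
```

Here the table answers no query on its own; meaning emerges only from the circuit structure, which is precisely the distinction Jason draws against Block's single giant table.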
But those are improbable. Also, I don't believe in the Turing test, except as the only means to really judge, but this means also that I could look at the machine's eyes, and have long-term relationships. I don't believe in the finite Turing test, but some transfinite version might do.
Such look-up tables are white rabbits themselves, and none has the power of a Löbian machine per se. But if they can be mimicked by some entities, then as long as they are correct, they incarnate the same person in Platonia, and that is what counts, in life and in the measure calculus.
To take a finer-grain version of it, let's say the look-up table was filled with Einstein's brain state at time T (at some Planck time) and the output of the look-up table was Einstein's brain state at time T+1 (the next Planck time). Would iterating recursively over this table really be a computation invoking Einstein's consciousness, or would it just be a slightly more sophisticated playback of a movie?
It would be equivalent, which again suggests we had better not associate consciousness with the physical activity, or with the lower universal system implementing it. But then, below our substitution level, we must find the trace of the "parallel" computations, and QM might suggest that this is indeed the case, although the math has to confirm this (which it does, apparently, already at the propositional level).
Does an ant trained to perform the look-up table's operation become more aware when placed in a vast library than when placed on a small bookshelf, to perform the identical function?
Are you not making Searle's level confusion?
I see the close parallel, but I hope not. The input to the ant, when interpreted as a binary string, is a number that tells the ant how many pages to walk past to get to the page containing the answer; where the ant stops, the page is read. I don't see how this system, consisting of the ant and the library, is conscious. The system is intelligent, in that it provides meaningful answers to queries, but it processes no information besides evaluating the magnitude of an input (represented as a number) and then jumping to that offset to read that memory location. Can there be consciousness in a simple "A implies B" relation?
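The ant-and-library system Jason describes reduces to the following sketch (the library contents are placeholders): the only processing performed is evaluating the input's magnitude and reading that offset.

```python
# A tiny stand-in for the vast library; each page holds one canned answer.
LIBRARY = ["answer 0", "answer 1", "answer 2", "answer 3"]

def ant_lookup(query_bits: str) -> str:
    offset = int(query_bits, 2)  # evaluate the input's magnitude...
    return LIBRARY[offset]       # ...and read that memory location
```

Whatever intelligence the whole exhibits lives in how LIBRARY was filled, not in these two lines, which is Jason's point.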
One second of emulation of the brain of Einstein might need a billion consultations of the look-up table, if the brain is supposed to be emulated at the right substitution level. The inputs might need to be precise enough, and a human body has many inputs in a short interval.
To make the look-up table, you will need to do most of the computations, and the task of building that look-up table program, if done well, will just make it possible for Einstein to manifest his consciousness (in Platonia) relatively to you. But the consciousness is not in the body, nor in the look-up table, but in the person "Einstein", whose first person point of view lives at the intersection of some truth and some beliefs. The beliefs are representable ("programmable"), but not the truth itself.
But what of the truth that one program decides to emulate another? If M1 emulates M2 believing p, it is still only M2 which believes p, not M1.
Two sub-programs, you and the instance of Einstein's brain, can be part of a larger Platonic program, which is just one of the infinite programs below your substitution level.
Yes.
Your choosing to activate the Einstein brain emulation isn't meaningless, as it would change the measure for Einstein's brain and possibly affect the probabilities for its consistent extensions. Would it not?
Yes.
There is, after all, a reason to get out of bed, and to repair one's brain with a digital computer (if necessary). If deciding to take a digital prosthesis makes a difference to you, won't the doctor instantiating Einstein's brain make a difference for Einstein?
Yes, of course. As long as a machine represents correctly the Platonic person, which is what Turing machines can do when assuming comp, the first person (conscious) experience will be able to manifest itself relatively to the most probable computations in which the emulation is done.
The consciousness (if there is one) is the consciousness of the
person, incarnated in the program. It is not the consciousness of
the low level processor, no more than the physicality which
supports the ant and the table.
Again, with comp there is never any problem with all of this. The
consciousness is an immaterial attribute of an immaterial program/
machine's soul, which is defined exclusively by a class of true
number relations.
While I can see certain very complex number relations leading to a
human-level consciousness, I don't find that kind of complexity
present in the relations defining a lookup table. Especially
because any meaning or interpretation of the output depends on the
person querying it, there's no self-contained understanding of the
program's own output.
Why? When you get the output, you need to re-enter it and ask the look-up table again. It works only because we suppose armies of daemons have already done the computations.
Determining the answers the first time might require computations that lead to consciousness, but later invocations of the stored memory in the lookup table don't lead to those original computations being performed again. It is just a memory access.
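Jason's distinction between the first computation and later memory accesses is exactly memoization. A sketch with an explicit work counter (the Fibonacci function is an arbitrary stand-in for the "original computations"):

```python
memo = {}
computations = 0  # counts real work, as opposed to memory accesses

def fib(n: int) -> int:
    """Each value is computed at most once; every later request for the
    same n is answered by a table read alone."""
    global computations
    if n in memo:
        return memo[n]
    computations += 1
    memo[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return memo[n]
```

After fib(20) has been computed once, calling fib(20) again performs zero new computations; whether that repeat access preserves whatever the first run "did" is precisely the point in dispute.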
No, because you agree that the system remains counterfactually correct, so it is not just a memory access; there is a conditional which is satisfied by the process. And indeed, if the look-up table is miniaturized and put in the brain, with an army of super-fast little daemons managing it in "real time", the person will pass the infinite Turing test, so why not bet (correctly here, by construction) that it manifests the correct Platonic person?
Again, it just means that only persons are conscious, not processes, nor computations, programs, machines, or anything 3p-describable.
That is what is given by the "& p" hypostases. They describe the logic of something not nameable by the machine itself, but which directly concerns the machine's selves and its consistent extensions.
The person is defined by its truth and beliefs, and the relation between truth and beliefs, from the different person points of view (defined in the Theaetetus manner: []p, []p & p, etc.).
Bruno
Again, like with the random neuron, even emulating a planaria for two seconds at a low level by a look-up table will need a table vastly bigger than the observable physical universe.
By construction, we keep the counterfactual correctness, and the planaria, and Einstein, behave exactly as they would with a normal representation/body. They are unaware that they have been implemented by a ridiculously giant look-up table, and that the processor can take a long time finding the entry of the table for this or that input. Entries are *very* numerous.
The task of a 3p machine consists only in associating that
consciousness to your local reality, but the body of the machine,
or whatever 3p you can associate to the machine, is not conscious,
and, to be sure, does not even exist as such.
I am aware it is hard to swallow, but there is no contradiction (so far). And to keep comp, and avoid attributing a mind, or worse a "partial" mind, to people without brains, or to a movie (which handles only very simple computations (projections)), I don't see any other option (but fake magic).
It is perhaps helpful to see that this reversal makes a theory like
Robinson arithmetic into a TOE, and to start directly with it.
In that case, everything we deal with is defined in terms of arithmetical formulas, that is, in terms of 0, s, + and *.
The handling of the difference between objects and their descriptions is made explicit through the coding, or Gödel-numbering, or programming, of the objects concerned.
For example, in the combinators, the number 0 is sometimes defined by the combinator SKK; the expression "SKK" can be represented by a Gödel number (in many different ways), attributing to 0 a rather big number representing its definition, and this distinguishes well between 0 and its representation (which will be a rather big number). Proceeding in this way, we avoid the easy confusions between the object level and the metalevel, and we can even mix them in a clean way, as the metalevel embeds itself at the object level (which is what made Gödel's and Löb's arithmetical self-reference possible in the first place).
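The coding Bruno describes can be sketched with a toy Gödel numbering (the symbol codes and the prime-exponent scheme are my arbitrary choices, not anything canonical): the combinator expression "SKK" that defines 0 is itself represented by a rather big number, keeping the object and its description distinct.

```python
# Assign each combinator symbol a code, then encode position i as the
# i-th prime raised to that code; distinct expressions get distinct numbers.
SYMBOL_CODE = {'S': 1, 'K': 2, 'I': 3}
PRIMES = [2, 3, 5, 7, 11, 13]

def godel_number(expr: str) -> int:
    n = 1
    for i, sym in enumerate(expr):
        n *= PRIMES[i] ** SYMBOL_CODE[sym]
    return n
```

Here godel_number("SKK") = 2^1 * 3^2 * 5^2 = 450: the number 0, as defined by SKK, is cleanly distinguished from the number 450 representing that definition.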
I would have believed this almost refutes comp, were it not for that quantum-Everett confirmation of this admittedly shocking self-diffraction (which is actually nothing compared to the one with information elimination or dissociation, needed for a "semantics of first person dying" in that self-diffracting reality).
We can't know the truth, but we can study the consequences of comp about that, and what machines can say about that: what they justify or don't justify, express or don't express, hope or fear, etc.
I appreciated the reversal; I suppose my line of questioning should be interpreted as asking what classes of programs in arithmetic/RA/Platonia have the capacity to support or add measure to first person views of the type humans seem to have.
*All* the universal programs; and the Löbian programs can deduce the laws of physics from that, as the measure one can be defined by the logic which []p & <>t obeys, with p sigma_1. Physical means provable (true in all my consistent extensions, []p), computable (UD-accessible, p is sigma_1), and this in the case where there is a consistent extension (<>t).
That most naive and simple idea seems to work already. Quantization appears also with []p & p, and with []p & <>t & p.
Incompleteness provides the nuances giving sense to the ancient definitions of the Greeks and Indians (and others).
The winner might be a quantum invariant on some geometrical structure. There might be a very symmetrical, non-Turing-universal bottom playing a role. Machine theology is in its infancy (to say the least).
Bruno
Jason
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to everything-
[email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
http://iridia.ulb.ac.be/~marchal/