On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal <[email protected]> wrote:

>
> On 23 May 2015, at 17:07, Jason Resch wrote:
>
>
>
> On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <[email protected]> wrote:
>
>>
>> On 19 May 2015, at 15:53, Jason Resch wrote:
>>
>>
>>
>> On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
>> <[email protected]> wrote:
>>
>>> On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>>> >
>>> >
>>> > On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
>>> > <[email protected]> wrote:
>>> >>
>>> >> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>> >>
>>> >> > I think you're not taking into account the level of the functional
>>> >> > substitution. Of course functionally equivalent silicon and
>>> >> > functionally equivalent neurons can (under functionalism) both
>>> >> > instantiate the same consciousness. But a calculator computing 2+3
>>> >> > cannot substitute for a human brain computing 2+3 and produce the
>>> >> > same consciousness.
>>> >>
>>> >> In a gradual replacement the substitution must obviously be at a level
>>> >> sufficient to maintain the function of the whole brain. Sticking a
>>> >> calculator in it won't work.
>>> >>
>>> >> > Do you think a "Blockhead" that was functionally equivalent to you
>>> >> > (it could fool all your friends and family in a Turing test
>>> >> > scenario into thinking it was in fact you) would be conscious in
>>> >> > the same way as you?
>>> >>
>>> >> Not necessarily, just as an actor may not be conscious in the same way
>>> >> as me. But I suspect the Blockhead would be conscious; the intuition
>>> >> that a lookup table can't be conscious is like the intuition that an
>>> >> electric circuit can't be conscious.
>>> >>
>>> >
>>> > I don't see an equivalency between those intuitions. A lookup table
>>> > has a bounded and very low degree of computational complexity: all
>>> > answers to all queries are answered in constant time.
>>> >
>>> > While the table itself may have an arbitrarily high information
>>> > content, what in the software of the lookup table program is there to
>>> > appreciate/understand/know that information?
>>>
>>> Understanding emerges from the fact that the lookup table is immensely
>>> large. It could be wrong, but I don't think it is obviously less
>>> plausible than understanding emerging from a Turing machine made of
>>> tin cans.
>>>
>>>
>>>
>> The lookup table is intelligent, or at least offers the appearance of
>> intelligence, but it takes maximum advantage of the space-time trade-off:
>> http://en.wikipedia.org/wiki/Space–time_tradeoff
>>
>> The tin-can Turing machine is unbounded in its potential computational
>> complexity; there's no reason to be a bio- or silico-chauvinist against it.
>> However, by definition, a lookup table has near-zero computational
>> complexity and no retained state.
>>
>>
>> But it is counterfactually correct on a large range of inputs. Of course,
>> it has to be infinite to be genuinely counterfactually correct.
>>
>>
> But the structure of the counterfactuals is identical regardless of the
> inputs and outputs in its lookup table. If you replaced all of its outputs
> with random strings, would that change its consciousness? What if there
> existed a special decoding book, a one-time pad that could decode its
> random answers? Would the existence of this book make it more conscious
> than if the book did not exist? If there is zero information content in
> the outputs returned by the lookup table, it might as well return all "X"
> characters as its response to any query; but then would any program that
> just returns a string of "X"s be conscious?
>
> A lookup table might have some primitive consciousness, but I think any
> consciousness it has would be more or less the same regardless of the
> number of entries within that lookup table. With more entries, its
> information content grows, but its capacity to process, interpret, or
> understand that information remains constant.
>
>
> You can emulate the brain of Einstein with a (ridiculously large) look-up
> table, assuming you are ridiculously patient, or we slow down your own
> brain so that you are as slow as "Einstein".
> Is that incarnation a zombie?
>
> Again, with comp, all incarnations are "zombies", because bodies do not
> think. It is the abstract person which thinks, and in this case "Einstein"
> will still be defined by the "simplest normal" computations, which here,
> and only here, have taken the form of that implausible giant "Einstein
> look-up table" emulation at the right level.
>
>
That last bit is the part I have difficulty with. How can a single call to
a lookup table ever be at the right level? It seems that with only one
lookup, the table must always operate (by definition) at the highest level,
which is probably not low enough to be at the right level. On the other
hand, if we are talking about using lookup tables to implement AND, OR,
NAND, NOT, etc., then I can see a CPU based on lookup tables for these
low-level operations being used to implement a conscious program. The type
of lookup table I have a problem ascribing intelligence to is the type Ned
Block described as being capable of passing a Turing test: one which
receives text inputs and returns text answers via a single lookup into
the table.
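To make the gate-level reading concrete, here is a minimal sketch (my own
illustration, not from Block or anyone in this thread; all names are
hypothetical) of boolean operations implemented purely as lookup tables and
composed into a half adder:

```python
# Sketch: CPU-style use of lookup tables at the gate level. Each boolean
# operation is just a table indexed by its inputs; composing such tables
# yields arbitrary circuits, and ultimately a whole processor.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
NOT = {0: 1, 1: 0}

def nand(a, b):
    # NAND built by chaining the AND and NOT tables
    return NOT[AND[(a, b)]]

def xor(a, b):
    # XOR composed entirely from table lookups
    return AND[(OR[(a, b)], nand(a, b))]

def half_adder(a, b):
    # returns (sum bit, carry bit), a building block of arithmetic units
    return xor(a, b), AND[(a, b)]

print(half_adder(1, 1))  # (0, 1): sum 0, carry 1
```

The point of the sketch is that the *same* lookup mechanism behaves very
differently depending on the level at which it is applied: here the tables
sit below the computation, rather than replacing the whole computation with
one giant input-to-output table.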

To take a finer-grained version of it, let's say the lookup table was
filled with Einstein's brain state at time T (some Planck time), and the
output of the lookup table was Einstein's brain state at time T+1 (the
next Planck time). Would iterating recursively over this table really be a
computation invoking Einstein's consciousness, or would it just be a
slightly more sophisticated playback of a movie?
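A toy sketch of that thought experiment (my own illustration; the three
states are hypothetical stand-ins for Planck-time brain snapshots) shows
what "iterating recursively over this table" amounts to:

```python
# Toy version of the thought experiment: the table maps an entire system
# state at time T to the state at T+1. "Running the brain" is then nothing
# but repeated lookups -- no computation beyond memory access occurs.
transition = {
    "s0": "s1",
    "s1": "s2",
    "s2": "s0",  # toy dynamics: a three-state cycle
}

def playback(state, steps):
    # iterate the table, recording each visited state
    trace = [state]
    for _ in range(steps):
        state = transition[state]  # one time step = one table lookup
        trace.append(state)
    return trace

print(playback("s0", 4))  # ['s0', 's1', 's2', 's0', 's1']
```

Structurally this loop is indistinguishable from stepping through the
frames of a film, which is exactly the worry raised above.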


>
>
>
>
>
>>
>> Does an ant trained to perform the lookup table's operation become more
>> aware when placed in a vast library than when placed on a small bookshelf,
>> to perform the identical function?
>>
>>
>> Are you not making Searle's level confusion?
>>
>
> I see the close parallel, but I hope not. The input to the ant, when
> interpreted as a binary string, is a number that tells the ant how many
> pages to walk past to get to the page containing the answer; where the ant
> stops, the paper is read. I don't see how this system, consisting of the
> ant and the library, is conscious. The system is intelligent, in that it
> provides meaningful answers to queries, but it processes no information
> besides evaluating the magnitude of an input (represented as a number) and
> then jumping to that offset to read that memory location. Can there be
> consciousness in a simple "A implies B" relation?
>
>
> A one-second emulation of the brain of Einstein might need a billion
> consultations of the look-up table, if the brain is supposed to be
> emulated at the right substitution level. The inputs might need to be
> precise enough, and a human body has many inputs in a short interval.
>
> To make the look-up table, you will need to do most of the computations,
> and the task of building that look-up table program, if done well, will
> just make it possible for Einstein to manifest his consciousness (in
> Platonia), enacted relatively to you. But the consciousness is not in the
> body, nor in the look-up table, but in the person "Einstein" whose first
> person point of view lives at the intersection of some truth and some
> beliefs. The beliefs are representable ("programmable"), but not the
> truth itself.
>
>
>
But what of the truth that one program decides to emulate another? Two
sub-programs, you and the instance of Einstein's brain, can be part of a
larger Platonic program, which is just one of the infinite programs below
your substitution level. Your choosing to activate the Einstein brain
emulation isn't meaningless, as it would change the measure for Einstein's
brain and possibly affect the probabilities for its consistent extensions.
Would it not?

There is, after all, a reason to get out of bed, and to repair one's brain
with a digital computer (if necessary). If deciding to take a digital
prosthesis makes a difference to you, won't the doctor instantiating
Einstein's brain make a difference for Einstein?


>
>
>
>>  The consciousness (if there is one) is the consciousness of the person,
>> incarnated in the program. It is not the consciousness of the low level
>> processor, no more than the physicality which supports the ant and the
>> table.
>>
>> Again, with comp there is never any problem with all of this. The
>> consciousness is an immaterial attribute of an immaterial program/machine's
>> soul, which is defined exclusively by a class of true number relations.
>>
>>
> While I can see certain very complex number relations leading to a
> human-level consciousness, I don't find that kind of complexity present in
> the relations defining a lookup table. Especially because any meaning or
> interpretation of the output depends on the person querying it, there's no
> self-contained understanding of the program's own output.
>
>
>
> Why? When you get the output, you need to re-enter it, and ask the look-up
> table again. It will work only because we suppose armies of daemons have
> already done the computations.
>

Determining the answers the first time might require computations that lead
to consciousness, but later invocations of the stored memory in the lookup
table don't lead to those original computations being performed again. It
is just a memory access.
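The distinction being drawn here is essentially memoization: the
computation runs once, and every later invocation is a pure table lookup. A
minimal sketch (my own illustration; the names are hypothetical):

```python
# Memoization: the first call performs the actual computation; later calls
# only read stored memory, like the giant lookup table in the discussion.
calls = {"computed": 0}  # counts how often the real computation runs
_table = {}

def slow_square(n):
    # stands in for the expensive original computation
    calls["computed"] += 1
    return n * n

def memo_square(n):
    if n not in _table:
        _table[n] = slow_square(n)  # done once, by the "armies of daemons"
    return _table[n]                # afterwards, just a memory access

memo_square(12)
memo_square(12)
memo_square(12)
print(calls["computed"])  # 1: the computation itself ran only once
```

Whatever process might accompany the first call, the second and third calls
trigger none of it; that asymmetry is the crux of the question above.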


> Again, like with the random neuron, even emulating a planaria for two
> seconds at a low level by a look-up table will need a table vastly bigger
> than the observable physical universe.
> By construction, we keep the counterfactual correctness, and the planaria
> and Einstein behave exactly as they would with a normal
> representation/body. They are unaware that they have been implemented by a
> ridiculously giant look-up table, and that the processor can take a long
> time finding the entry of the table for this or that input. Entries are
> *very* numerous.
>
>
>

>
>
>
>
>> The task of a 3p machine consists only in associating that consciousness
>> to your local reality, but the body of the machine, or whatever 3p you can
>> associate to the machine, is not conscious, and, to be sure, does not even
>> exist as such.
>>
>> I am aware it is hard to swallow, but there is no contradiction (so far).
>> And to keep comp, and avoid attributing a mind, or worse a "partial" mind,
>> to people without brains, or to a movie (which handles only very simple
>> computations (projections)), I don't see any other option (but fake magic).
>>
>> It is perhaps helpful to see that this reversal makes a theory like
>> Robinson arithmetic into a TOE, and to start directly with it.
>>
>> In that case, everything we deal with is defined in terms of arithmetical
>> formulas, that is, in terms of 0, s, + and *.
>>
>> The handling of the difference between objects and their descriptions is
>> made explicit, through the coding, or Gödel-numbering, or programming, of
>> the objects concerned.
>>
>> For example, in the combinators, the number 0 is sometimes defined by
>> the combinator SKK; the expression "SKK" can be represented by a Gödel
>> number (in many different ways), attributing to 0 a rather big number
>> representing its definition, and this distinguishes well between 0 and
>> its representation (which will be a rather big number). Proceeding in
>> this way, we avoid the easy confusions between the object level and the
>> metalevel, and we can even mix them in a clean way, as the metalevel
>> embeds itself at the object level (which is what made Gödel and Löb's
>> arithmetical self-reference possible to start with).
>>
>> I would have believed this almost refutes comp, if there were not that
>> quantum-Everett confirmation of that admittedly shocking self-diffraction
>> (which is actually nothing compared to the one with information
>> elimination or dissociation, needed for a "semantics of first person
>> dying" in that self-diffracting reality).
>>
>> We can't know the truth, but we can study the consequences of comp about
>> that, and what machines can say about that, and justify or not justify,
>> express or not express, hope or fear, etc.
>>
>>
> I appreciate the reversal; I suppose my line of questioning should be
> interpreted as asking what classes of programs in arithmetic/RA/Platonia
> have the capacity to support or add measure to first-person views of the
> type humans seem to have.
>
>
> *All* the universal programs, and the Löbian programs can deduce the laws
> of physics from that, as the measure one can be defined by the logic which
> []p & <>t obeys, with p sigma_1. Physical means provable (true in all my
> consistent extensions ([]p)), computable (UD-accessible, p is sigma_1),
> and this in the case there is a consistent extension (<>t).
>
> That most naive and simple idea seems to work already. Quantization
> appears also with []p & p, and with []p & <>t & p.
>
> Incompleteness provides the nuances giving sense to the ancient
> definitions of the Greeks and Indians (and others).
>
> The winner might be a quantum invariant on some geometrical structure.
> There might be a very symmetrical, non-Turing-universal bottom playing a
> role. Machine theology is in its infancy (to say the least).
>
> Bruno
>
>
> Jason
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
