On Sun, May 24, 2015 at 12:40 AM, Pierz <[email protected]> wrote:

>
>
> On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:
>>
>>
>>
>> On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <[email protected]> wrote:
>>
>>>
>>> On 19 May 2015, at 15:53, Jason Resch wrote:
>>>
>>>
>>>
>>> On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou
>>> <[email protected]> wrote:
>>>
>>>> On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>>>> >
>>>> >
>>>> > On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
>>>> > <[email protected]> wrote:
>>>>
>>>> >>
>>>> >> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>>> >>
>>>> >> > I think you're not taking into account the level of the functional
>>>> >> > substitution. Of course functionally equivalent silicon and
>>>> >> > functionally equivalent neurons can (under functionalism) both
>>>> >> > instantiate the same consciousness. But a calculator computing 2+3
>>>> >> > cannot substitute for a human brain computing 2+3 and produce the
>>>> >> > same consciousness.
>>>> >>
>>>> >> In a gradual replacement the substitution must obviously be at a
>>>> >> level sufficient to maintain the function of the whole brain.
>>>> >> Sticking a calculator in it won't work.
>>>> >>
>>>> >> > Do you think a "Blockhead" that was functionally equivalent to you
>>>> >> > (it could fool all your friends and family in a Turing test
>>>> >> > scenario into thinking it was in fact you) would be conscious in
>>>> >> > the same way as you?
>>>> >>
>>>> >> Not necessarily, just as an actor may not be conscious in the same
>>>> >> way as me. But I suspect the Blockhead would be conscious; the
>>>> >> intuition that a lookup table can't be conscious is like the
>>>> >> intuition that an electric circuit can't be conscious.
>>>> >>
>>>> >
>>>> > I don't see an equivalence between those intuitions. A lookup table
>>>> > has a bounded and very low degree of computational complexity: all
>>>> > queries are answered in constant time.
>>>> >
>>>> > While the table itself may have an arbitrarily high information
>>>> > content, what in the software of the lookup table program is there to
>>>> > appreciate/understand/know that information?
>>>>
>>>> Understanding emerges from the fact that the lookup table is immensely
>>>> large. It could be wrong, but I don't think it is obviously less
>>>> plausible than understanding emerging from a Turing machine made of
>>>> tin cans.
>>>>
>>>>
>>>>
>>> The lookup table is intelligent, or at least offers the appearance of
>>> intelligence, but it takes the maximum possible advantage of the
>>> space-time tradeoff: http://en.wikipedia.org/wiki/Space–time_tradeoff
>>>
>>> The tin-can Turing machine is unbounded in its potential computational
>>> complexity; there's no reason to be a bio- or silico-chauvinist against it.
>>> However, by definition, a lookup table has near-zero computational
>>> complexity and no retained state.
>>>
>>>
>>> But it is counterfactually correct over a large range. Of course, it has
>>> to be infinite to be genuinely counterfactually correct.
>>>
>>>
>> But the structure of the counterfactuals is identical regardless of the
>> inputs and outputs in its lookup table. If you replaced all of its outputs
>> with random strings, would that change its consciousness? What if there
>> existed a special decoding book, a one-time pad that could decode its
>> random answers? Would the existence of this book make it more conscious
>> than if this book did not exist? If there is zero information content in
>> the outputs returned by the lookup table, it might as well return all "X"
>> characters as its response to any query; but then, would any program that
>> just returns a string of "X"s be conscious?
>>
> I really like this argument, even though I once came up with a (bad)
> attempt to refute it. I wish it received more attention because it does
> cast quite a penetrating light on the issue. What you're suggesting is
> effectively the cache pattern in computer programming, where we trade
> memory resources for computational resources. Instead of repeating a
> resource-intensive computation, we store the inputs and outputs for later
> regurgitation.
>

How is this different from a movie recording of brain activity (which most
on the list seem to agree is not conscious)? The lookup table is just a
really long recording, only we use the input to determine which section of
the recording to fast-forward or rewind to.
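
To make the parallel concrete, here is a minimal sketch in Python (the
function and table names are purely illustrative, not anyone's actual
proposal): the same input/output behaviour realized once by computation and
once by pure playback from a precomputed table.

    # Toy illustration: a "brain" that computes its reply, versus a
    # "recording" (lookup table) that merely replays precomputed replies.

    def compute_reply(query: str) -> str:
        # Stand-in for an arbitrarily complex computation.
        return query[::-1]

    # Precompute every reply over a finite input domain: the "Blockhead".
    DOMAIN = ["hello", "2+3", "are you conscious?"]
    TABLE = {q: compute_reply(q) for q in DOMAIN}

    def playback_reply(query: str) -> str:
        # A constant-time seek into the "recording"; no computation recurs.
        return TABLE[query]

    assert playback_reply("2+3") == compute_reply("2+3")

Externally the two are indistinguishable over the covered domain; internally
one computes while the other only seeks.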



> The cached results 'store' intelligence in an analogous way to the storage
> of energy as potential energy. We effectively flatten out time (the
> computational process) into the spatial dimension (memory).
>

But what if intelligence were not used to create the lookup table? Does the
history/creation event, which may be arbitrarily far in the past, really
play any role in a present level of consciousness? Or consider the reverse:
does the future creation of a one-time pad retroactively restore
intelligence (consciousness?) to what was previously always a dumb program?
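
To put the one-time-pad point in concrete terms (a toy sketch; the message
and pad are of course made up):

    import os

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        # XOR two equal-length byte strings: one-time-pad encryption,
        # and also decryption, since XOR is its own inverse.
        return bytes(x ^ y for x, y in zip(a, b))

    answer = b"I think, therefore I am"
    pad = os.urandom(len(answer))       # the "decoding book"
    stored = xor_bytes(answer, pad)     # what the lookup table returns

    # On its own, 'stored' is indistinguishable from random noise; only
    # the existence of 'pad' (perhaps created later) makes it meaningful:
    assert xor_bytes(stored, pad) == answer

Nothing about the table itself changes when the pad is created or destroyed,
which is what makes the question of its consciousness so awkward.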


> The cache pattern does not allow us to cheat the law that intelligent work
> must be done in order to produce intelligent results, it merely allows us
> to do that work at a time that suits us. The intelligence has been
> transferred into the spatial relationships built into the table,
> intelligent relationships we can only discover by doing the computations.
> The lookup table is useless without its index. So what your thought
> experiment points out is pretty fascinating: that intelligence can be
> manifested spatially as well as temporally, contrary to our common-sense
> intuition, and that the intelligence of a machine does not have to be in
> real time. That actually supports the MGA if anything - because
> computations are abstractions outside of time and space. We should not
> forget that the memory resources required to duplicate any kind of
> intelligent computer would be absolutely enormous, and the lookup table,
> although structurally simple, would embody a vast amount of
> computational intelligence.
>
>
There is a common programming technique called memoization: essentially,
building automatic caches for functions within a program. I wonder: would
adding memoization to the functions implementing an AI eventually turn it
into a zombie recording rather than a program, if it were fed all the same
inputs a second time? Perhaps the recording of a counterfactual relation
retains its effectiveness. Or perhaps it is impossible to differentiate the
two cases and ascribe consciousness to one but not the other.
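
A minimal sketch of what I mean, using Python's standard
functools.lru_cache (the Fibonacci example is purely illustrative):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # unbounded cache: every result is retained
    def fib(n: int) -> int:
        # The first call with a given n does the real computation; every
        # later call with the same n is answered by pure table lookup.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(30)  # the computation happens here
    fib(30)  # this call is now just a "recording" being replayed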

Jason



>
>
>> A lookup table might have some primitive consciousness, but I think any
>> consciousness it has would be more or less the same regardless of the
>> number of entries within that lookup table. With more entries, its
>> information content grows, but its capacity to process, interpret, or
>> understand that information remains constant.
>>
>>
>>>
>>> Does an ant trained to perform the lookup table's operation become more
>>> aware when placed in a vast library than when placed on a small bookshelf,
>>> to perform the identical function?
>>>
>>>
>>> Are you not making Searle's level confusion?
>>>
>>
>> I see the close parallel, but I hope not. The input to the ant, when
>> interpreted as a binary string, is a number that tells the ant how many
>> pages to walk past to get to the page containing the answer; where the ant
>> stops, the page is read. I don't see how this system, consisting of the ant
>> and the library, is conscious. The system is intelligent, in that it
>> provides meaningful answers to queries, but it processes no information
>> besides evaluating the magnitude of an input (represented as a number) and
>> then jumping to that offset to read that memory location. Can there be
>> consciousness in a simple "A implies B" relation?
>>
>>
>>> The consciousness (if there is one) is the consciousness of the person,
>>> incarnated in the program. It is not the consciousness of the low-level
>>> processor, any more than it is that of the physicality which supports the
>>> ant and the table.
>>>
>>> Again, with comp there is never any problem with all of this. The
>>> consciousness is an immaterial attribute of an immaterial program/machine's
>>> soul, which is defined exclusively by a class of true number relations.
>>>
>>>
>> While I can see certain very complex number relations leading to a
>> human-level consciousness, I don't find that kind of complexity present in
>> the relations defining a lookup table. Especially because any meaning or
>> interpretation of the output depends on the person querying it, there's no
>> self-contained understanding of the program's own output.
>>
>>
>>> The task of a 3p machine consists only in associating that consciousness
>>> to your local reality, but the body of the machine, or whatever 3p you can
>>> associate to the machine, is not conscious, and, to be sure, does not even
>>> exist as such.
>>>
>>> I am aware it is hard to swallow, but there is no contradiction (so
>>> far). And to keep comp, and avoid attributing a mind, or worse a "partial"
>>> mind, to people without brains, or to a movie (which handles only very
>>> simple computations (projections)), I don't see any other option (but fake
>>> magic).
>>>
>>> It is perhaps helpful to see that this reversal makes a theory like
>>> Robinson arithmetic into a TOE, and to start directly with it.
>>>
>>> In that case, everything we deal with is defined in terms of arithmetical
>>> formulas, that is, in terms of 0, s, + and *.
>>>
>>> The handling of the difference between objects and their descriptions is
>>> made explicit through the coding, or Gödel numbering, or programming, of
>>> the objects concerned.
>>>
>>> For example, in the combinators, the number 0 is sometimes defined by
>>> the combinator SKK. The expression "SKK" can in turn be represented by a
>>> Gödel number (in many different ways), attributing to 0 a rather big
>>> number representing its definition, and this distinguishes well between 0
>>> and its representation (which will be a rather big number). Proceeding in
>>> this way, we avoid the easy confusions between the object level and the
>>> metalevel, and we can even mix them in a clean way, as the metalevel
>>> embeds itself at the object level (which is what made Gödel's and Löb's
>>> arithmetical self-reference possible to start with).
>>>
>>> I would have believed this almost refutes comp, were it not for the
>>> quantum-Everett confirmation of that admittedly shocking self-diffraction
>>> (which is actually nothing compared to the one with information
>>> elimination or dissociation, needed for a "semantics of first-person
>>> dying" in that self-diffracting reality).
>>>
>>> We can't know the truth, but we can study the consequences of comp
>>> regarding it, and what machines can say about it: what they can justify
>>> or not justify, express or not express, hope or fear, etc.
>>>
>>>
>> I appreciate the reversal. I suppose my line of questioning should be
>> interpreted as asking which classes of programs in arithmetic/RA/platonia
>> have the capacity to support or add measure to first-person views of the
>> type humans seem to have.
>>
>> Jason
>>
