On 27 May 2015, at 09:00, Jason Resch wrote:



On Wed, May 27, 2015 at 12:31 AM, Pierz <[email protected]> wrote:


On Wednesday, May 27, 2015 at 11:27:26 AM UTC+10, Jason wrote:


On Mon, May 25, 2015 at 1:46 PM, Bruno Marchal <[email protected]> wrote:

On 25 May 2015, at 02:06, Jason Resch wrote:



On Sun, May 24, 2015 at 3:52 AM, Bruno Marchal <[email protected]> wrote:

On 23 May 2015, at 17:07, Jason Resch wrote:



On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <[email protected]> wrote:

On 19 May 2015, at 15:53, Jason Resch wrote:



On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou <[email protected]> wrote:
On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>
>
> On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou <[email protected]>
> wrote:

>>
>> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>
>> > I think you're not taking into account the level of the functional
>> > substitution. Of course functionally equivalent silicon and
>> > functionally equivalent neurons can (under functionalism) both
>> > instantiate the same consciousness. But a calculator computing 2+3
>> > cannot substitute for a human brain computing 2+3 and produce the
>> > same consciousness.
>>
>> In a gradual replacement the substitution must obviously be at a level
>> sufficient to maintain the function of the whole brain. Sticking a
>> calculator in it won't work.
>>
>> > Do you think a "Blockhead" that was functionally equivalent to you
>> > (it could fool all your friends and family in a Turing test scenario
>> > into thinking it was in fact you) would be conscious in the same way
>> > as you?
>>
>> Not necessarily, just as an actor may not be conscious in the same way
>> as me. But I suspect the Blockhead would be conscious; the intuition
>> that a lookup table can't be conscious is like the intuition that an
>> electric circuit can't be conscious.
>>
>
> I don't see an equivalency between those intuitions. A lookup table has a
> bounded and very low degree of computational complexity: all answers to all
> queries are answered in constant time.
>
> While the table itself may have an arbitrarily high information content,
> what in the software of the lookup table program is there to
> appreciate/understand/know that information?

Understanding emerges from the fact that the lookup table is immensely
large. It could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.



The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes maximum possible advantage of the space-time trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff

The tin-can Turing machine is unbounded in its potential computational complexity; there's no reason to be a bio- or silico-chauvinist against it. A lookup table, by contrast, has by definition near-zero computational complexity and no retained state.
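The contrast can be made concrete with a toy sketch of the space-time trade-off (the function and names here are purely illustrative): the "lookup table" answers every query with a single constant-time memory access, but only because all the computation was done up front and stored.

```python
# Space-time trade-off sketch: trade memory for computation time.
# slow_square stands in for real computational work; fast_square does
# no computation at all, just one memory access into a precomputed table.

def slow_square(n):
    """Deliberately O(n) computation standing in for real work."""
    total = 0
    for _ in range(n):
        total += n
    return total

# Precompute every answer over a finite input domain (0..999).
LOOKUP = {n: slow_square(n) for n in range(1000)}

def fast_square(n):
    """Constant-time answer: a lookup, with zero retained state or logic."""
    return LOOKUP[n]

assert fast_square(12) == slow_square(12) == 144
```

The two functions have identical input/output behaviour over the table's domain, which is exactly what makes the question of their equivalence as substrates for consciousness non-trivial.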

But it is counterfactually correct over a large range. Of course, it has to be infinite to be genuinely counterfactually correct.


But the structure of the counterfactuals is identical regardless of the inputs and outputs in its lookup table. If you replaced all of its outputs with random strings, would that change its consciousness? What if there existed a special decoding book, which was a one-time-pad that could decode its random answers? Would the existence of this book make it more conscious than if this book did not exist? If there is zero information content in the outputs returned by the lookup table it might as well return all "X" characters as its response to any query, but then would any program that just returns a string of "X"'s be conscious?

A lookup table might have some primitive consciousness, but I think any consciousness it has would be more or less the same regardless of the number of entries within that lookup table. With more entries, its information content grows, but its capacity to process, interpret, or understand that information remains constant.

You can emulate the brain of Einstein with a (ridiculously large) lookup table, assuming you are ridiculously patient---or we slow down your own brain so that you are as slow as "Einstein".
Is that incarnation a zombie?

Again, with comp, all incarnations are "zombies", because bodies do not think. It is the abstract person which thinks, and in this case "Einstein" will still be defined by the "simplest normal" computations, which here, and only here, have taken the form of that implausible giant "Einstein lookup table" emulation at the right level.


That last bit is the part I have difficulty with. How can a single call to a lookup table ever be at the right level?

Actually, the Turing machine formalism is a type of lookup table: if you are scanning input i (a big number describing all your current sensory entries) while you are in state q_169757243685173427379910054234647572376400064994542424646334345787910190034....676754100687 (a big number describing one of your many possible mental states), then change the state into q_888..99 and look at what comes next.
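This point, that a Turing machine's transition function is itself a finite lookup table, can be illustrated with a toy machine (the states and tape alphabet here are invented for illustration):

```python
# A Turing machine's transition function is literally a lookup table:
# (state, scanned symbol) -> (new state, symbol to write, head move).
# Toy machine: flips every bit on the tape, halting at the first blank.
DELTA = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}

def run(tape):
    """Drive the machine by repeated lookups into DELTA until it halts."""
    tape = list(tape) + ["_"]          # append a blank end marker
    state, head = "q0", 0
    while state != "halt":
        state, tape[head], move = DELTA[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

assert run("0110") == "1001"
```

The difference from Block's giant table is that this table is consulted repeatedly, with state carried between lookups, rather than answering the whole query in one access.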

By construction that system behaves self-referentially correctly, so it points to the relevant person/history, ... The history of that program makes no sense, so it is only in the case of its rare incarnation in a thought experiment that I endow it with as much consciousness as the brain (which means none to the brain-body, and all to the immaterial person related to some programs executed in the relatively plausible brain).




It seems to me that with only one lookup, the table must always operate (by definition) at the highest level, which is probably not low enough to be at the right level.

It is hard to imagine such a lookup table could exist; even to implement a mosquito, it would escape the multiverse. Little self-changing programs reflecting the neighborhood are more practical.

Consciousness is not in the lower-level activity of the runner, but in a higher-level autonomous circular loop, with complex layers of universal machines between the high and low levels in the brain. The brain filters and singularizes consciousness relative to a set of stories.




On the other hand, if we are talking about using lookup tables to implement and, or, nand, not, etc., then I can see a CPU based on lookup tables for these low-level operations being used to implement a conscious program. The type of lookup table I have a problem ascribing intelligence to is the type Ned Block described as being capable of passing a Turing test: one which receives text inputs and returns text answers using a single consultation of the lookup table.
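The low-level case is easy to sketch: each gate is a tiny lookup table, yet composing such tables yields arbitrary computation (the half adder below is a standard construction, shown here purely as illustration):

```python
# Each logic gate is a 4-entry lookup table; composing them yields
# arbitrary computation. NAND alone is universal.
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def nand(a, b):
    return NAND[(a, b)]

def xor(a, b):
    """XOR built purely from NAND lookups."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    """Add two bits; returns (sum, carry), again using only NAND lookups."""
    return xor(a, b), nand(nand(a, b), 1)

assert half_adder(1, 1) == (0, 1)   # 1 + 1 = binary 10
assert half_adder(1, 0) == (1, 0)   # 1 + 0 = binary 01
```

The distinction in the text is then between many small composed lookups (a CPU) and one giant single-consultation lookup (Block's table), even though both realize the same input/output function.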

But those are improbable. Also, I don't believe in the Turing test, except as the only means to really judge; but this means also that I could look into the machine's eyes, and have long-term relationships. I don't believe in the finite Turing test, but some transfinite version might do.


Such lookup tables are white rabbits themselves, and none has the power of a Löbian machine per se. When they are mimicked by some entities, as long as they are correct, they incarnate the same person in Platonia, and that is what counts, in life and in the measure calculus.



To take a finer-grained version of it, let's say the lookup table was filled with Einstein's brain state at time t (at some Planck time) and the output of the lookup table was Einstein's brain state at time t+1 (the next Planck time). Would iterating recursively over this table really be a computation invoking Einstein's consciousness, or would it just be a slightly more sophisticated playback of a movie?

It would be equivalent, which again suggests we had better not associate consciousness with the physical activity, or with the lower universal system implementing it. But then, below our substitution level, we must find the trace of the "parallel" computations, and QM might suggest that this is indeed the case, although the math has to confirm this (which it does, apparently, already at the propositional level).









Does an ant trained to perform the lookup table's operation become more aware when placed in a vast library than when placed on a small bookshelf, to perform the identical function?

Are you not making Searle's level confusion?

I see the close parallel, but I hope not. The input to the ant, when interpreted as a binary string, is a number that tells the ant how many pages to walk past to get to the page containing the answer; where the ant stops, the page is read. I don't see how this system, consisting of the ant and the library, is conscious. The system is intelligent, in that it provides meaningful answers to queries, but it processes no information besides evaluating the magnitude of an input (represented as a number) and then jumping to that offset to read that memory location. Can there be consciousness in a simple "A implies B" relation?
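The ant-and-library system described above reduces to a single indexed read; a minimal sketch (the pages and queries here are invented for illustration):

```python
# The ant-and-library system as one indexed memory access: the query,
# read as a binary number, is an offset into a fixed sequence of pages.
PAGES = ["It was 1905.", "E = mc^2", "Spacetime curves.", "No."]

def ant_lookup(query_bits):
    offset = int(query_bits, 2)   # the ant "walks past" this many pages
    return PAGES[offset]          # stops and reads the page found there

assert ant_lookup("10") == "Spacetime curves."
```

The system's entire information processing is the `int(query_bits, 2)` step plus one array access, however large `PAGES` grows, which is the point of the question: the library's size changes the stored information, not the processing.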

A one-second emulation of the brain of Einstein might need a billion consultations of the lookup table, if the brain is supposed to be emulated at the right substitution level. The inputs might need to be precise enough, and a human body has many inputs in a short interval.

To make the lookup table, you will need to do most of the computations, and the task of building that lookup-table program, if done well, will just make it possible for Einstein's consciousness (in Platonia) to be enacted relative to you. But the consciousness is not in the body, nor in the lookup table, but in the person "Einstein", whose first-person point of view lives at the intersection of some truth and some beliefs. The beliefs are representable ("programmable"), but not the truth itself.



But what of the truth that one program decides to emulate another?

If M1 emulates M2 believing p, it is still only M2 which believes p, not M1.




Two sub-programs You and the instance of Einstein's brain can be part of a larger Platonic program, which is just one of the infinite programs below your substitution level.

Yes.


Your choosing to activate the Einstein brain emulation isn't meaningless, as it would change the measure for Einstein's brain and possibly affect the probabilities for its consistent extensions. Would it not?

Yes.




There is after all a reason to get out of bed, and to repair one's brain with a digital computer (if necessary). If deciding to take a digital prosthesis makes a difference to you, won't the doctor instantiating Einstein's brain make a difference for Einstein?

Yes, of course. As long as a machine represents correctly the platonic person, which is what a Turing machine can do when assuming comp, then the first-person (conscious) experience will be able to manifest itself relative to the most probable computations in which the emulation is done.






The consciousness (if there is one) is the consciousness of the person incarnated in the program. It is not the consciousness of the low-level processor, any more than it is of the physicality which supports the ant and the table.

Again, with comp there is never any problem with all of this. The consciousness is an immaterial attribute of an immaterial program/machine's soul, which is defined exclusively by a class of true number relations.


While I can see certain very complex number relations leading to a human-level consciousness, I don't find that kind of complexity present in the relations defining a lookup table. Especially because any meaning or interpretation of the output depends on the person querying it, there's no self-contained understanding of the program's own output.


Why? When you get the output, you need to re-enter it, and ask the lookup table again. It will work only because we suppose armies of daemons having already done the computations.

Determining the answers the first time might require computations that lead to consciousness, but later invocations of the stored memory in the lookup table don't lead to those original computations being performed again. It is just a memory access.

No, because you agree that the system remains counterfactually correct, so it is not just a memory access; there is a conditional which is satisfied by the process. And indeed, if the lookup table is miniaturized and put in the brain, with an army of super-fast little daemons managing it in "real time", the person will pass the infinite Turing test. So why not bet (correctly here, by construction) that it manifests the correct platonic person?

Again, it just means that only persons are conscious, not processes, nor computations, programs, machines, or anything 3p-describable. That is what is given with the "& p" hypostases. They describe the logic of something not nameable by the machine itself, but which directly concerns the machine's selves, and its consistent extensions.

The person is defined by its truth and beliefs, and the relation between truth and beliefs, from the different personal points of view (defined in the Theaetetus' manner: []p, []p & p, etc.).

Bruno



But are not computations something more than mere inputs and outputs of functions? It is like Putnam's objection to functionalism: there are multiple ways of realizing each function, and they are not necessarily equivalent. I think once one admits that the inputs and outputs are not all that matters, this leads to abandoning functionalism for computationalism, which also necessitates the concept of a substitution level.

Where I see lookup tables fail is that they seem to operate above the probable necessary substitution level (despite having the same inputs/outputs at the higher levels).

But your memoization example still makes a good point - namely that some computations can be bypassed in favour of recordings, yet presumably this doesn't lead to fading qualia. We don't need anything as silly as a gigantic lookup table of all possible responses. We only need to acknowledge that we can store the results of recordings of computations we've already completed, and that this should not result in any strange degradation of consciousness.
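The memoization point can be sketched in a few lines (a standard pattern, shown with an illustrative function): once a result is computed, later calls are answered from a stored table, and the original computation is not rerun.

```python
# Memoization sketch: the first call performs the computation and stores
# the result; subsequent calls are pure table lookups.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    global calls
    calls += 1          # counts actual computations, not cache hits
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
first = calls
fib(20)                 # answered from the cache: no new computation
assert calls == first == 21   # fib(0)..fib(20) each computed exactly once
```

This is exactly the intermediate case under discussion: the second invocation has the same inputs and outputs as the first, but involves none of the original computational steps.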


Degradation, no, but my question pertains to running the memorized brain multiple times. Does that affect the measure of their conscious moment in the same way rerunning the entire computation does?


Normally, it affects the measure only if it leads to more duplication in the future. To be sure, this is not explained by AUDA, as it requires solving more complex technical questions. (I can handle today only the measures 1 and 0, which is enough to get the logic of the yes/no experiments, and to show it to be a quantum logic.)

Bruno



Jason

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/


