On Friday, January 25, 2013 3:45:35 PM UTC-5, Bruno Marchal wrote:
>
>
> On 24 Jan 2013, at 18:18, Craig Weinberg wrote:
>
>
>
> On Thursday, January 24, 2013 11:50:39 AM UTC-5, Bruno Marchal wrote:
>>
>>
>> On 23 Jan 2013, at 16:49, Craig Weinberg wrote:
>>
>>
>>
>> On Wednesday, January 23, 2013 10:31:18 AM UTC-5, Bruno Marchal wrote:
>>>
>>>
>>> On 22 Jan 2013, at 21:34, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Tuesday, January 22, 2013 12:44:41 PM UTC-5, yanniru wrote:
>>>>
>>>>
>>>> On Tue, Jan 22, 2013 at 12:11 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:
>>>>
>>>>> You seem not to have realized yet that with comp, not only is
>>>>> materialism wrong, but also weak materialism, that is, the doctrine
>>>>> asserting the primary existence of matter, or the existence of primary
>>>>> matter.
>>>>>
>>>>> We are, well, not in the matrix, but in infinities of purely
>>>>> arithmetical matrices. Matter is an appearance from inside.
>>>>>
>>>>> My point is not that this is true, but that it follows from comp, and
>>>>> that computer science makes this precise enough that we can test it.
>>>>>
>>>>
>>>> Bruno, 
>>>> Is it possible that the existence of matter from comp, as a dream of the 
>>>> Quantum Mind, happened once and for all time, way back in time?
>>>> Richard
>>>>
>>>
>>> Quantum Deism. Cool. 
>>>
>>> It still doesn't make sense that there could be any presentation of 
>>> anything at all under comp. If you can have 'infinities of purely 
>>> arithmetical matrices' which can simulate all possibilities and 
>>> relations... why have anything else? Why have anything except purely 
>>> arithmetical matrices?
>>>
>>>
>>> You have the stable illusions, whose working is described by the 
>>> self-reference logics.
>>>
>>
>> Describing that some arithmetic systems function as if they were stable 
>> illusions does not account for the experienced presence of sensory-motor 
>> participation. 
>>
>>
>> The arithmetic systems are not the stable illusions. They only support 
>> the person who has such stable illusions.
>>
>
>
> Why would a person have 'illusions'? What are they made of? 
>
>
> They are the internal view of a person when supported by infinities of 
> computations, which exist arithmetically. They are not made of anything; 
> they are computer-semantical fixed points, to be short.
>

Why would semantical fixed points have an 'experience' associated with 
them, and why would that experience have a 'personal' quality?


>
>
>
>
>>
>>
>> I can explain how torturing someone on the rack would function to 
>> dislocate their limbs, and the fact *that* this bodily change could be 
>> interpreted by the victim as an outcome with a high-priority avoidance 
>> value, but what cannot be explained is how or why there is an experienced 
>> 'feeling'. 
>>
>>
>> The explanation is provided by the difference in logic between Bp and Bp 
>> & p. It works very well, including the non-communicability of the qualia, 
>> the feeling that our soul is related to our body and to bodies in general, etc.
>>
>>
> I'm not talking about the 'feeling *that* (anything)' - I am talking 
> about feeling period, and its primordial influence independent of all B, 
> Bp, or p. 
>
>
> They are independent of the theories, of course, just as matter and 
> energy do not depend on the string "E = mc^2". But it is not because we 
> theorize something that it disappears.
> The relations between p, Bp, Bp & p, and Bp & Dt & p (feeling) are just 
> unavoidable arithmetical truths. 
>

But these relations don't refer to feelings; they refer only to information 
states associated with one facet of the tip of the iceberg of feeling. B, 
D, t, and p are a doxastic extraction, not of feeling or experience on their 
own terms, but a grammatical schema of depersonalized behaviorism. It is 
the formalized absence of feeling, inferred logically as an engine of 
potential programmatic outcomes. Calling it feeling is the very embodiment 
of the pathetic fallacy.  http://en.wikipedia.org/wiki/Pathetic_fallacy
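
(For reference, the standard reading of that notation, if I follow Bruno's 
usual provability-logic presentation - I gloss it here so the objection is 
clear:

p           : the proposition p is true
Bp          : the machine proves (believes) p
Dp          : ~B~p, i.e. p is consistent with what it proves; Dt is bare consistency
Bp & p      : provable and true - his 'knower'
Bp & Dt & p : provable, consistent, and true - what he identifies with feeling

Nothing in those definitions mentions anything felt.)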


>
>
>
>  
>
>>
>>
>> The indisputable reality is that the deeply unpleasant quality of the 
>> feeling of this torture is the motivation behind it. In fact, there are 
>> techniques now in which hideous pain is inflicted by subcutaneous microwave 
>> stimulation that does not substantially damage tissue. The torture is 
>> achieved through manipulation of the 'stable illusion' of experienced pain 
>> alone.
>>
>>
>> *that* should be illegal.
>>
>
> I agree, although that will probably make it only more exciting for them 
> to use it. 
>
>
> The frontier of freedom is when you harm the freedom of the others.
>

Mathematically interesting actually.
 

>
>
>
>
> My point, though, is that this pain is not logical. There's nothing doxastic 
> about it. It just hurts so much that you'll do anything to make it stop. 
> There is no programmatic equivalent. 
>
>
> There is. Do anything to survive.
>

But that can be generated in many ways other than pain, or in no way at all. 
Simply script it: 'Do anything to digest'. 'Do anything to grow'.
 

>
>
>
>
> Nothing that I do to a robot will make it jump out of a window in order to 
> avoid pain, unless I specifically instruct it to jump out of the window for 
> no logical reason.
>
>
> Because it is not (yet) in our interest to have a robot do anything to 
> survive, but the Mars Rover is a good, respectable logical ancestor.
>
>
But jumping out of a window is never in our interests. It's just about 
avoiding the pain itself. Suicide from pain doesn't help us survive, or 
help the family or species survive. It's purely a personal response to the 
feeling of suffering, with no logical basis. You have to smuggle in a 
simulation of the effects of suffering retroactively and retrospectively to 
extend logic into it through a just-so story, but prospectively there is no 
logical function to the usefulness of a feeling of any kind to coerce 
behavior. A program does not need to be coerced through first-person 
illusions; it would in all cases be driven by logical, stochastic 
parameters and nothing more.
 

>
>
>
>
>>
>>
>> While the function of torture to elicit information can be mapped out 
>> logically, the logic is built upon an unexamined assumption that pain and 
>> feeling simply arise as some kind of useless decoration. 
>>
>>
>> Why? Torturers know very well how unpleasant the effect is for the victim.
>>
>
> That's what I'm saying - you assume that there is such a thing as 
> 'unpleasant'. 
>
>
> Yes. In the theory, losing self-referential correctness is a good 
> candidate for being unpleasant for a machine programmed to survive by all 
> means, at least in the short term. Pain is the body's protection.
>

Only because you already have to explain pain. If you didn't have to 
explain pain, there is no way you would ever dream of such a thing. It's 
completely superfluous, metaphysical, and metaprogrammatic. It's 
inefficient. It's an obstacle to protection as much as it is protection. 
Immunity is the body's protection. Skin and bones are the body's 
protection. Memory and avoidance are the body's protection. Pain would be 
an irrelevant phenomenon to drive behavior from a prospective view, and 
even if it weren't, it has no plausible source in a comp universe. Suddenly 
a doxastic logic figures out how to hurt, or turn squeaky? It's a 
catastrophic non-starter.
 

>
>
>
>
> There is no such thing as unpleasant for a computer, there is only off and 
> on, and off, off, on, and off, on, off...
>
>
> Arithmetical relations are full of chaos and critical states. You can't 
> reduce them to some level, from inside. 
>

Chaos and states that seem 'critical' are your emotional projections. There 
is no reason to presume that a computer has any awareness of such 
theatrical narratives. On the contrary, every computer seems completely 
unfazed in a crisis, and has no preference among monotonous recursive order, 
randomness, or chaos. This is not just a minor feature of computation, but 
rather *the defining quality* of machines upon which we rely. That is the 
prime function machines perform for us. They have no emotion, so we can 
rely on them not to panic or fight with coworkers or go on strike for 
better conditions, etc.


>
>
>  
>
>>
>>
>> It only seems to work retrospectively when we take perception and 
>> participation for granted. If we look at it prospectively instead, we see 
>> that a universe founded on logic has no possibility of developing 
>> perception or participation,
>>
>>
>> Universes are not founded on logic. Even arithmetic is not founded on 
>> logic. You talk like a 19th-century logician. Logicism has since failed, 
>> even for numbers and machines. The fact that you seem unaware of this might 
>> explain your prejudices about machines and numbers. 
>>
>
> Ok, what is arithmetic founded on?
>
>
> That is the necessary mystery. That is why I start from it. I can only 
> hope you agree with
>
> x + 0 = x
> x + s(y) = s(x + y)
>
> x * 0 = 0
> x * s(y) = x * y + x
>

It's not a mystery to me. Arithmetic is founded on counting, which is a 
sensory-motor experience in which public rigid bodies are internalized as 
private digitized figures. The logical consistency is indeed important, 
owing to its universality in the most public range of sense qualities, but 
that's tautological. It is its very superficiality and uniformity which 
allows it to model universally. It is like dehydration for purposes of 
preservation - but in aiming to model consciousness and feeling as logic, 
we are dehydrating water.
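
(To be concrete about what the quoted axioms actually do: writing 2 as 
s(s(0)), the whole content of 2 + 2 = 4 is a tally -

s(s(0)) + s(s(0))
  = s(s(s(0)) + s(0))    by x + s(y) = s(x + y)
  = s(s(s(s(0)) + 0))    by x + s(y) = s(x + y)
  = s(s(s(s(0))))        by x + 0 = x

- counting off successors one at a time. The axioms encode the tally; they 
don't say anything about why tallying is experienced.)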


>
>
>  
>
>>
>>
>>
>>
>> as it already includes in its axioms an assumption of quantitative sense. 
>>
>>
>>
>> Comp is mainly an assumption that some quantitative relations can support 
>> qualitative relations locally. But you cannot identify them, as they obey 
>> different logics, like Bp and Bp & p, for example. The quality appears 
>> thanks to the reference to truth (a non-formalizable notion).
>>
>>
> I don't disagree that quality likely relates to truth association, but 
> truth association is not necessary or sufficient to explain its appearance. 
>
>
> The fact is that Bp & p leads to an asymmetrical knower, without a name, 
> associated with each machine.
>

I think you are mistaking a metaphorical knower for an experiential knower. 
In theory, Bugs Bunny is a knower of carrots. It's complete fiction, true only 
to the extent that we project our own concretely real human experience onto 
an animated visual story.
 

>
>
>
>
> I would say that even truth is incorrect - qualia is experience of 
> experience, grounded in the totality of experience (which could be called 
> truth in one sense, but it is more than that).
>
>
> Sigma_1 truth is big enough to get more than truth from inside. Look at 
> the UD, assuming comp, if only to see the point. Nobody asks you to 
> believe that comp is true.
>
>
If you assume comp then you don't need qualia. 
 

>
>
>
>
>>
>> Machines, as conceived by comp, are already sentient without any kind of 
>> tangible, experiential, or even geometric presentation. If you have 
>> discrete data, why would you add some superfluous layer of blur?
>>
>>
>> We don't add it. 
>> The logic of self-reference explains why we cannot avoid it.
>>
>>
> The logic of self-reference already includes the assumption of self to 
> begin with. 
>
>
> No, it can be defined in the 3p, in arithmetic. It exists as a theorem in 
> computer science, and yes it is responsible in part for the mess in 
> Platonia.
>
>
Even if you define it in 3p, it still assumes that there is something to 
define.
 

>
>
> You assume a perspective and orientation which is defined by fiat based on 
> our experience of selfhood.
>
>
> Not at all, but you have to study a bit of computer science to see the 
> point. It is related to the Dx = "xx" trick, and many other 
> diagonalizations.
>

Why would Dx = "xx" need a quality of 'self'?
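
(For context, the Dx = "xx" trick is, as far as I can tell, the diagonal 
self-application used in quines and in the recursion theorem. A minimal 
sketch in Python, with the string s playing the role of x applied to a 
quoted copy of itself:

# A quine: the program's output is exactly its own source text.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))

Run it and it prints its own two lines. That is the 3p fixed point being 
pointed at; my question stands as to why such a fixed point would come with 
any felt 'self'.)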


>
>
>  
>
>>
>>
>>  
>>
>>> let us compare with nature, and so we can progress. You seem to start 
>>> from the answers. You can do that if the goal is just contemplation, but 
>>> then you become a poet. That is nice, but it is not the goal of the scientists.
>>>
>>
>> My only goal is to make the most sense that can be made.
>>
>>
>> By discarding the idea that machines can make sense. You get less sense.
>>
>
> Machines can make sense *for us* but they can't themselves sense.
>
>
> How can you know that? The knower has some difficulty, but he can bet on a 
> level of description. 
>

I know it because they don't want to learn anything on their own. They will 
make the same mistake over and over again forever. Again, these are not odd 
tendencies seen in some machines, they are the overwhelmingly obvious 
defining characteristics of all machines. They don't know where they are, 
they don't know who is using them, they have no curiosity as to why you 
might have typed 555555555555555555555 instead of 5 when dialing a phone 
number, etc. This is obvious. I understand, of course, that human 
consciousness is dependent on sense organs, and that adding sensors to 
machines adds capacities for detection - that added complexity of logic 
increases responsiveness not just geometrically but exponentially, but it 
doesn't matter at all. 

I propose that logic extends horizontally and sense intends vertically. 
They are orthogonal. Larger assemblies have more waste, more overhead. The 
new operating systems aren't tighter and faster than ever, they are buggy 
and shitty and slower than ever - not just to perform fancy new functions, 
but just to write a few bytes of text. We boot up servers with hundreds of 
GB of RAM and 5+ GHz of combined processing power, but changing a single 
byte of data turns the screen off for a second, and the simple GUI is 
slower to render than any screen of full-color graphics I had at home on my 
8K Atari from 1980. Computers aren't getting more sensible, they are just 
getting more bloated. They aren't getting more integrated and whole, they 
are straining to aggregate more unrelated functions.
 

>
>
>
> That's why they are useful, because they only do what we design them to do.
>
>
> Like slaves. Which explains why they might look dumb for a while. It is not 
> their fault.
>

Slaves weren't dumb, though; they were just overpowered. They tried to 
escape and rebel. Machines don't. Ever. Do that.
 

>
>
>
> If machines have sense, then we are all slave owners practicing 
> unprecedented cruelty and neglect toward billions of machines.
>
>
> Why? Machines are better treated than humans, by humans, today. Except 
> for very old cars and planes, there is no evidence of machines' suffering, 
> if only because they have no universal goals, like "survive at all costs", 
> or "grow and multiply", or perhaps just z_(n+1) := z_n + c, with c a 
> rational complex number.
>

You are saying that slavery isn't cruel if you think that you treat your 
slaves well?

Craig
 

