On Sat, Jul 2, 2011 at 4:57 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:

>
> On 01 Jul 2011, at 13:23, selva kumar wrote:
>
>> Is consciousness causally effective?
>>
>> I found this question in previous threads, but I didn't find an answer.
>>
>
> Was it in the FOR list (on the book The Fabric of Reality by David Deutsch)?
> I thought I did answer this question, which is a very important and
> fundamental question.
>
> It is also a tricky question, closely related to the question of free will,
> and it can lead to vocabulary issues. I often defend the idea that
> consciousness is effective. Indeed, the role I usually defend for
> consciousness is a relative self-speeding-up ability. Yet the question is
> tricky, especially because of the "causally", which is harder to grasp or
> define than "consciousness" itself.
>
> Let me try to explain. For this I need some definitions, and I hope for some
> understanding of the UDA and a bit of AUDA. Ask for more precision if needed.
>
> The main ingredients for the explanation are three theorems due to Gödel:
>
> - the Gödel completeness theorem (available for machines talking first-order
> logic or a sufficiently effective higher-order logic). The theorem says that
> a theory or machine is consistent (a syntactical notion, = ~Bf) iff the
> theory has a model (a mathematical structure in which it makes sense to say
> that a proposition is true). I will rephrase this by saying that a machine is
> consistent if and only if the machine's beliefs make sense in some reality.
>
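> To make this first point concrete, here is a toy Python sketch of the
> *propositional* analogue only (much weaker than the real first-order
> theorem; the particular "beliefs" and names are made up for the example): a
> finite set of beliefs is consistent exactly when some valuation, a tiny
> candidate "reality", makes them all true.
>
>   # Toy illustration of "consistent iff the beliefs have a model",
>   # restricted to the propositional case.
>   from itertools import product
>
>   def satisfying_models(beliefs, atoms):
>       """Yield every valuation of the atoms in which all beliefs hold."""
>       for values in product([False, True], repeat=len(atoms)):
>           valuation = dict(zip(atoms, values))
>           if all(belief(valuation) for belief in beliefs):
>               yield valuation
>
>   # Beliefs are encoded as functions from a valuation to a truth value.
>   beliefs = [
>       lambda v: v["p"] or v["q"],   # p or q
>       lambda v: not v["p"],         # not p
>   ]
>   models = list(satisfying_models(beliefs, ["p", "q"]))
>   print("consistent:", bool(models), "models:", models)
>
> Here the two beliefs are consistent, and the search exhibits the single tiny
> "reality" (p false, q true) in which they make sense.
>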
> - the Gödel second incompleteness theorem, ~Bf -> ~B(~Bf): if the machine is
> consistent, then this is not provable by the machine. So if the beliefs are
> real in some reality, the machine cannot prove the existence of that
> reality. This is used in a somewhat strict way, because we don't assume the
> machine can prove its completeness (even though Orey has shown this to be
> the case). This entails that eventually the machine can add its own
> consistency as a new axiom, but this leads to a new machine, for which a
> novel notion of consistency appears, and the 'new' machine can still not
> prove the existence of a reality "satisfying" its beliefs. Yet that machine
> can easily prove the consistency of the machine she was. This can be
> reiterated as many times as there are (constructive) ordinals, and this is
> what I describe as a climbing from G to G*. The modal logic of
> self-reference remains unchanged, but its arithmetical interpretation
> expands. An infinity of previously undecidable propositions become
> decidable, and ... another phenomenon occurs:
>
> - the Gödel length-of-proof theorem. Once a machine adds an undecidable
> proposition, like its own consistency, as a new axiom/belief, not only do an
> infinity of (arithmetical) propositions become decidable, but an infinity of
> already provable propositions get shorter proofs. Indeed, and amazingly
> enough, for any number x we can find a proposition whose proof will be x
> times shorter than its shortest proof in the belief system without the
> undecidable proposition. A similar, but not entirely equivalent, theorem is
> true for universal computation ability, showing in particular that there is
> no bound to the rapidity of computers, and this just by a change of
> software (alas, with a finite number of exceptions in the *effective*
> self-speeding up: but the evolution of species need not be effective or
> programmable in advance).
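>
> To fix ideas, here is one common modern way of phrasing the speed-up
> phenomenon (a paraphrase, not Gödel's original 1936 formulation, which was
> stated for the passage from n-th order to (n+1)-th order logic): for a
> reasonable theory T and any computable bound f, there is a sentence
> \varphi, already provable in T, such that
>
>   \min\{\,|p| : p \text{ proves } \varphi \text{ in } T\,\} \;>\; f\big(\min\{\,|p| : p \text{ proves } \varphi \text{ in } T + \mathrm{Con}(T)\,\}\big)
>
> where |p| measures the length of the proof p. Taking f(k) = x·k gives the
> "x times shorter" reading used above.
>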
>
Extrapolating this and applying it to the human machine, consider this:
if we firmly believe that all our proofs and instincts about mathematical
truths are correct, will we get shorter proofs? Then this turns into a proof
of the existence of a power of belief (?).
Also, speaking strictly, it means that if you believe you are intelligent,
then you become more intelligent (which is in immediate contradiction with
Gödel's second incompleteness theorem and your smallest theory of
intelligence).

> Now I suggest we (re)define consciousness as a machine's (instinctive,
> preprogrammed) ability to bet on a reality. This is equivalent (stricto
> sensu: the machine does not need to know this) to an ability to bet on its
> own consistency (excluding that very new axiom, to avoid inconsistency). As
> a universal system, this will speed up the machine relative to the probable
> local universal system(s) and will in that way augment its degrees of
> freedom. If two machines play ping-pong, the machine which is quicker has a
> greater range of possible moves/strategies than its opponent.
>
> So the answer to the question "is consciousness effective?" would be yes, if
> you accept such a definition.
>
> Is that consciousness *causally* effective? That is the tricky part, related
> to free will. If you accept the definition of free will that I have often
> suggested, then the answer is yes. Causality will have its normal "physical
> definition", except that with comp such physicalness is given by an
> arithmetical quantization (based on the material hypostasis defined by Bp &
> Dp): p physically causes q iff something like BD(BDp -> BDq). I recall Dp =
> ~B~p. But of course, in God's eyes, there are only true (and false) number
> relations. The Löbian phenomenon then shows that the consciousness
> self-speeding up is coupled with the building of the reality that the
> machine bets on. At that level, it is as if consciousness were the main
> force, perhaps the only original one, in the physical universe! This still
> needs more work to be made precise enough. There is a complex trade-off
> between the "causally" and the "effective" at play, I think.
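>
> Written out in one line (just a notational restatement of the sentence
> above, using the same B and D):
>
>   Dp := \neg B \neg p, \qquad \text{matter: } Bp \land Dp, \qquad p \text{ ``physically causes'' } q \;\text{ iff }\; BD(BDp \rightarrow BDq)
>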
>
> I hope this was not too technical. The work of Gödel plays a fundamental
> role. This explanation is detailed in "Conscience et Mécanisme", and related
> more precisely to the inductive inference frame.
>
> To sum up: machine consciousness, in the theory, confers self-speeding-up
> abilities on the machine with respect to the most probable
> continuation/universal machine. It is obviously something useful for
> self-moving creatures: it makes them able to anticipate and avoid obstacles,
> which would explain why the self-moving creatures have developed
> self-reflexive brains and become Löbian (self-conscious). Note that here
> the role is attributed to self-consciousness, and not really to
> consciousness. But you need consciousness to have self-consciousness.
> Consciousness per se has no role, as in pure contemplation, but once
> reflected in the Löbian way, it might be the strongest causally effective
> force operating in 'arithmetical truth', the very origin of the (self)
> acceleration/force.
>

Why do you always limit the definition of consciousness (at least machine
consciousness) to its ability to learn alone?
Why shouldn't free will and sensory experiences (qualia, if you believe in
them) be part of consciousness itself, rather than a consequence or
precondition of it? In the absence of consciousness, there is indeed an
absence of free will and of experiencing qualia.
In that case, we can't prove that a universal machine is conscious.

>
> Note that the Gödel speed-up theorem is not hard to prove. There is a
> simple proof of it in the excellent book by Torkel Franzén, "Gödel's
> Theorem: An Incomplete Guide to Its Use and Abuse", which I recommend
> reading (despite it being more on the abuses than the uses). The original
> paper is in the book by Davis, "The Undecidable" (republished by Dover),
> which I consider a bible for "machine's theology".
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>

