On 27 Sep 2014, at 00:13, meekerdb wrote:

On 9/26/2014 11:05 AM, Bruno Marchal wrote:

On 25 Sep 2014, at 09:18, Russell Standish wrote:

On Wed, Sep 24, 2014 at 07:05:43PM -0700, meekerdb wrote:
On 9/24/2014 6:53 PM, John Clark wrote:

On Wed, Sep 24, 2014 at 5:52 PM, Telmo Menezes <[email protected]> wrote:


John argues that consciousness has real world consequences in terms of
         being evolutionarily selected


Either that or consciousness is the side effect of something else that has
real world consequences; if Darwin was right it can't be any other way.


You keep saying this. You also like to say things like "consciousness is how
information feels when it's being processed". I like that idea. It shows that you can indeed consider alternatives to the binary choice above. In this case evolution created a very complex scenario for consciousness to feel when information is being processed. But it did not create consciousness,


Evolution is only interested in intelligent behavior because only
that and not consciousness helps get genes into the next
generation. So how did evolution manage to produce at least
one being (me) that's conscious? There are only 2 possibilities:

1) Perhaps consciousness aids in producing intelligent behavior.
If this is true then it would be easier to make an intelligent
computer that was conscious than to make an intelligent computer
that was not conscious. It would also mean that the Turing Test is
not only a test for intelligence but also a good (although not
infallible) test for consciousness too.

2) The only way to produce intelligent behavior is to process
information, and perhaps it's just a brute fact that consciousness
is how information feels when it's being processed.

In my opinion #2 is more likely than #1, but if Darwin was right
then one of the two must be true. Either way consciousness
must be a biological spandrel, and if you ever run across a smart
computer you can conclude that it's probably conscious too.

I think #1 is more likely, so long as we identify consciousness with
what we experience, e.g. imaging, inner narrative, language (does
anybody here think they could formulate and understand Löb's theorem
without language?).  #2 is probably true in the sense that some
kind of consciousness goes with intelligent information processing.
But I think there are probably a wide range of different ways to do
intelligent information processing and they may give rise to
different kinds of consciousness (e.g. the hive mind of the Borg)
that would be hard for us to recognize in interacting with them.

Of course these are probably all equivalent under Bruno's idea that
consciousness is just being a universal computer and so babies and
trees and genomes are conscious too. But I think that's so broad a
concept of consciousness as to be obfuscatory.

Brent


My suspicion is also #1. Consciousness very likely is a strategy
for bringing together disparate, and perhaps contradictory,
unconscious thought processes to make a decision for action - any
decision is often better than none at all. This is essentially
Steven Mithen's account of how the human mind formed (cathedrals of
the mind and all that). It also accords with Tononi's integrated
information idea.

The trouble I have is that there are obvious ways of achieving the
same ends that don't involve consciousness - e.g. voting (think of the
three computers controlling the space shuttle). What makes consciousness
so much better than these other methods, or is it just an effective accident?


Consciousness gives the meaning, I would say. It is the basic religion, the belief in one's own self-consistency, which, when taken literally, already leads to a contradiction/catastrophe(*). When it works, it accelerates self-development. It automatically leads to wandering between enough security and enough liberty, which is natural in the "machine's unknown" (arithmetical truth, above sigma_1, up to analysis).

(*) I allude to the Rogerian sentence/machine which asserts its own consistency, and which is false.

A machine becomes inconsistent if she adds the axiom of her own consistency, including that very axiom, which can be built with the Dx = "xx" method (or with Kleene's second recursion theorem). Yet the machine, if consistent, gains proving power by adding as a *new* axiom that she "was" consistent.
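As a side note, the Dx = "xx" trick is easy to play with in any language that can quote its own expressions. Here is a minimal sketch in Python; the helper name D and the string encoding are illustrative choices of mine, not Kleene's formal construction:

```python
# A toy model of the Dx = "xx" diagonalization: D applied to a piece of
# program text x yields x applied to its own quotation.
def D(x: str) -> str:
    """Return the expression applying x to the quotation of x."""
    return f"{x}({x!r})"

expr = D("D")          # the diagonal case: D applied to its own name
print(expr)            # -> D('D')
# Evaluating the expression rebuilds the expression itself: a fixed
# point of quotation-plus-application, as in the recursion theorem.
assert eval(expr) == expr
```

The self-referential axiom "I am consistent, including this axiom" is built the same way: a sentence obtained as the fixed point of an operator that mentions its own quotation.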

What's the sense of "was" here?  Without using the new axiom?

It is the difference between the theories (where "bla-bla-bla" and the like stand for Turing-complete, e.g. arithmetical, axioms)

T1:

1. bla-bla-bla
2. blo-blo-blo
3. consistent (1 & 2 & 3)

and

T2:

1. bla-bla-bla
2. blo-blo-blo
3. consistent (1 & 2)

T1 exists; we can build it using the usual self-referential tool. T2 exists as well. But T1 is inconsistent, while T2 is much more powerful than the theory containing only axioms 1 and 2.
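For readers who want the theorem behind T1's inconsistency: it is an instance of Gödel's second incompleteness theorem, which in its standard form reads (a textbook statement, not specific to this thread):

```latex
% Gödel's second incompleteness theorem: if $T$ is a consistent,
% recursively axiomatizable theory containing enough arithmetic, then
T \nvdash \mathrm{Con}(T)
% Contrapositively: any such $T$ proving its own consistency is
% inconsistent. T1's third axiom makes T1 prove Con(T1), hence T1 is
% inconsistent. T2's third axiom asserts only Con(1 & 2), the
% consistency of a strictly smaller theory, so the theorem is not
% triggered.
```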

Note that the following theory T3:

1. bla-bla-bla
2. blo-blo-blo
3. inconsistent (1 & 2 & 3)

is consistent, but arithmetically unsound. Yet it too proves many more theorems than the simple "1 & 2", including interesting ones (that is, true Pi_1 sentences, not just sentences equivalent to []f or []<>t, which T3 also proves; T3 does not prove f).

The moral: handle self-consistency assertions with a lot of care, as they can lead you to inconsistency or unsoundness easily; yet they can make you more powerful (in speed, proving abilities, etc.).


Bruno

This makes the machine more powerful (= proving more theorems in arithmetic which were previously undecidable, and also squeezing, arbitrarily so, infinitely many proofs).

I think consciousness is a self-accelerator. It has been enhanced in self-moving animals, which have to anticipate prey and predators.

For first-order logic, by the Gödel-Henkin completeness theorem, being consistent is equivalent to having a semantics (a model, an interpretation in the logicians' sense of models).
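For reference, the completeness theorem invoked here can be stated in its model-existence form:

```latex
% Gödel–Henkin completeness theorem (model-existence form): a first-order
% theory $T$ is syntactically consistent iff it has a model $\mathcal{M}$.
\mathrm{Con}(T) \iff \exists\,\mathcal{M}\ (\mathcal{M} \models T)
```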

Machines cannot justify the existence of a model or semantics of themselves, but they can bet on their 3p-self and its relation with such models. Semantics, well used, shortens the work; badly used, it leads to crashes.

Bruno

PS I have to go.



--

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      [email protected]
University of New South Wales http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
       (http://www.hpcoders.com.au/AmoebasSecret.html)
----------------------------------------------------------------------------

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected] .
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/







