Hi Jesse Mazer 

I admire Penrose and what he is courageously trying to do
(pointing out the deficiencies of materialism, although
he still protects himself by calling himself a humanist
and an atheist).

But both Penrose and Chalmers are still stuck with the
intractable Cartesian mind/body dichotomy adopted by modern
science, which simply ignores that the dichotomy exists.

There is some hope that they might see the virtues of quantum monadology,
which adopts Leibniz's Idealism. 

Too much to ask, I suppose, but I see no other way.     
My only wishful consolation is Max Planck's quip that science advances
one funeral at a time. But at the same time its stagnation
(not moving ahead) lies not in conservatism, whose fundamental
criteria are logic, truth and facts, but in the liberals' clever
tactic of stigmatizing any mention of God or any use of
any philosophy but materialism.

So our culture is gradually IMHO degenerating into materialism and atheism.

Roger Clough, rclo...@verizon.net
8/24/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
----- Original message ----- 
From: Jesse Mazer 
To: everything-list 
Date: 2012-08-23, 21:09:46
Subject: Re: A remark on Richard's paper





On Thu, Aug 23, 2012 at 8:41 PM, Richard Ruquist <yann...@gmail.com> wrote:

Jesse,


This is what Chalmers says in the 95 paper you link about the second Penrose 
argument, the one in my paper:


" 3.5 As far as I can determine, this argument is free of the obvious flaws 
that plague other Gödelian arguments, such as Lucas's argument and Penrose's 
earlier arguments. If it is flawed, the flaws lie deeper. It is true that the 
argument has a feeling of achieving its conclusion as if by magic. One is 
tempted to say: "why couldn't F itself engage in just the same reasoning?". But 
although there are various directions in which one might try to attack the 
argument, no knockdown refutation immediately presents itself. For this reason, 
the argument is quite challenging. Compared to previous versions, this argument 
is much more worthy of attention from supporters of AI."


Chalmers finally concludes that the flaw for Godel, which Penrose also assumed, 
is the assumption that we can know we are sound. So the other way around, if 
Godel is correct, so is the Penrose second argument, which Chalmers confirmed. 
However, Chalmers seems to be saying that Godel is incorrect, hardly a basis for 
my paper.


What do you mean "the flaw for Godel"? There is no doubt that Godel's 
mathematical proof is correct, and if you think Chalmers is suggesting any such 
doubt in his paper you are misreading him. The argument he's talking about is 
one specifically concerning human intelligence, which Godel's mathematical 
proof says nothing about (Godel did offer some brief comments about the 
implications of his mathematical proof for human intelligence, but they were 
very brief and somewhat ambiguous, see http://www.iep.utm.edu/lp-argue/#H4 ). 
And I already quoted his conclusions about the second argument, after the 
section you quote above: that although Chalmers agrees that Penrose's second 
argument does show that *either* our reasoning cannot be captured by a formal 
system *or* that we cannot be sure our reasoning is sound, Chalmers thinks 
Penrose is wrong to prefer the first option rather than the second.




Personally, when I am sound, I know I am sound. When I am unsound I usually 
know that I am unsound. However, psychosis runs in my family, and many times I 
have watched a relative lapse into psychosis without him realizing it.


Chalmers/Penrose aren't talking about "sound" in the ordinary colloquial sense 
of sanity or anything like that, they're talking about soundness in the sense 
of perfect mathematical certainty that there is absolutely no chance--not even 
a chance of 1 in 10^1000000000 or smaller, say--that they might have made an 
error in their judgement about the truth or falsity of some (potentially very 
complicated) proposition about arithmetic.
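
To be a bit more explicit about the term (my own notation, not Chalmers'): a 
formal system F is sound when everything it proves about arithmetic is actually 
true,

    \text{Sound}(F) \;:\equiv\; \forall \varphi \,\big( F \vdash \varphi \;\Rightarrow\; \mathbb{N} \models \varphi \big),

and on Chalmers' reading the most Penrose's second argument establishes is the 
disjunction: either no formal system F captures our mathematical reasoning, or 
we cannot know, with that kind of certainty, that the F which does is sound.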






Besides, I sent the paper to Chalmers and he had no problem with it. But he did 
wish me luck getting it published. He knew something I had not yet learned.
Richard




Did Chalmers offer any detailed commentary suggesting he had read through the 
whole thing carefully? If not it's possible he skimmed it and missed that 
sentence, or just read the abstract and decided it didn't interest him, but 
sent the note out of politeness.


Jesse





On Thu, Aug 23, 2012 at 8:19 PM, Jesse Mazer <laserma...@gmail.com> wrote:

A quibble with the beginning of Richard's paper. On the first page it says:


'It is beyond the scope of this paper and admittedly beyond my understanding to 
delve into Gödelian logic, which seems to be self-referential proof by 
contradiction, except to mention that Penrose in Shadows of the Mind (1994), as 
confirmed by David Chalmers (1995), arrived at a seemingly valid 7 step proof 
that human "reasoning powers cannot be captured by any formal system".'
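
(For concreteness, the "self-referential proof by contradiction" is the Gödel 
sentence construction; this gloss is mine, not anything in Richard's paper. For 
a formal system F strong enough for arithmetic, the diagonal lemma yields a 
sentence G_F with

    F \vdash \; G_F \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner),

so G_F in effect says "F does not prove me". If F is consistent, F does not 
prove G_F; and if F is sound, G_F is therefore true but unprovable in F.)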


If you actually read Chalmers' paper 
at http://web.archive.org/web/20090204164739/http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html
 he definitely does *not* "confirm" Penrose's argument! He says in the paper 
that Penrose has two basic arguments for his conclusions about consciousness, 
and at the end of the section titled "the first argument" he concludes that the 
first one fails:


"2.16 It is section 3.3 that carries the burden of this strand of Penrose's 
argument, but unfortunately it seems to be one of the least convincing sections 
in the book. By his assumption that the relevant class of computational systems 
are all straightforward axiom-and-rules systems, Penrose is not taking AI 
seriously, and certainly is not doing enough to establish his conclusion that 
physics is uncomputable. I conclude that none of Penrose's arguments up to this 
point put a dent in the natural AI position: that our reasoning powers may be 
captured by a sound formal system F, where we cannot determine that F is sound."


Then when dealing with Penrose's "second argument", he says that Penrose draws 
the wrong conclusions; where Penrose concludes that our reasoning cannot be the 
product of any formal system, Chalmers concludes that the actual issue is that 
we cannot be 100% sure our reasoning is "sound" (which I understand to mean we 
can never be 100% sure that we have not made a false conclusion about whether 
all the propositions we have proved true or false actually have that 
truth-value in "true arithmetic"):


"3.12 We can see, then, that the assumption that we know we are sound leads to 
a contradiction. One might try to pin the blame on one of the other 
assumptions, but all these seem quite straightforward. Indeed, these include 
the sort of implicit assumptions that Penrose appeals to in his arguments all 
the time. Indeed, one could make the case that all of premises (1)-(4) are 
implicitly appealed to in Penrose's main argument. For the purposes of the 
argument against Penrose, it does not really matter which we blame for the 
contradiction, but I think it is fairly clear that it is the assumption that 
the system knows that it is sound that causes most of the damage. It is this 
assumption, then, that should be withdrawn.


"3.13 Penrose has therefore pointed to a false culprit. When the contradiction 
is reached, he pins the blame on the assumption that our reasoning powers are 
captured by a formal system F. But the argument above shows that this 
assumption is inessential in reaching the contradiction: A similar 
contradiction, via a not dissimilar sort of argument, can be reached even in 
the absence of that assumption. It follows that the responsibility for the 
contradiction lies elsewhere than in the assumption of computability. It is the 
assumption about knowledge of soundness that should be withdrawn.


"3.14 Still, Penrose's argument has succeeded in clarifying some issues. In a 
sense, it shows where the deepest flaw in Gödelian arguments lies. One might 
have thought that the deepest flaw lay in the unjustified claim that one can 
see the soundness of certain formal systems that underlie our own reasoning. 
But in fact, if the above analysis is correct, the deepest flaw lies in the 
assumption that we know that we are sound. All Gödelian arguments appeal to this 
premise somewhere, but in fact the premise generates a contradiction. Perhaps 
we are sound, but we cannot know unassailably that we are sound."
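
Glossing over Chalmers' exact premises (1)-(4), the compressed formal shadow of 
that argument (my compression, not his wording) is the familiar one: if our 
mathematical reasoning were captured by a consistent formal system F containing 
enough arithmetic, and we could establish, by reasoning F itself can reproduce, 
that F is sound, then in particular F would prove its own consistency,

    F \vdash \mathrm{Con}(F),

which Gödel's second incompleteness theorem rules out for any consistent F of 
that kind. So the claim that has to go is the claim to unassailable knowledge 
of our own soundness, which is exactly where Chalmers lays the blame.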


So it seems Chalmers would have no problem with the "natural AI" position he 
discussed earlier, that our reasoning could be adequately captured by a 
computer simulation that did not come to its top-level conclusions about 
mathematics via a strict axiom/proof method involving the mathematical 
questions themselves, but rather by some underlying fallible structure like a 
neural network. The bottom-level behavior of the simulated neurons themselves 
would be deducible given the initial state of the system using the axiom/proof 
method, but that doesn't mean the system as a whole might not make errors in 
mathematical calculations; see Douglas Hofstadter's discussion of this issue 
starting on p. 571 of "Godel Escher Bach", the section titled "Irrational and 
Rational Can Coexist on Different Levels", where he writes:


"Another way to gain perspective on this is to remember that a brain, too, is a 
collection of faultlessly functioning element-neurons. Whenever a neuron's 
threshold is surpassed by the sum of the incoming signals, BANG!-it fires. It 
never happens that a neuron forgets its arithmetical knowledge-carelessly 
adding its inputs and getting a wrong answer. Even when a neuron dies, it 
continues to function correctly, in the sense that its components continue to 
obey the laws of mathematics and physics. Yet as we all know, neurons are 
perfectly capable of supporting high-level behavior that is wrong, on its own 
level, in the most amazing ways. Figure 109 is meant to illustrate such a class 
of levels: an incorrect belief held in the software of a mind, supported by the 
hardware of a faultlessly functioning brain."


Figure 109 depicts the outline of a person's head with "2+2=5" appearing inside 
it, but the symbols in "2+2=5" are actually made up of large collections of 
smaller mathematical equations, like "7+7=14", which are all correct. A nice 
way of illustrating the idea, I think.
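
To make that separation of levels concrete in code (a toy of my own, not 
anything from Hofstadter or from Richard's paper): every simulated "neuron" 
below applies its weighted-sum-and-threshold rule exactly, yet the network's 
top-level answer to 2 + 2 comes out wrong, because the error lives in the 
made-up weights rather than in the arithmetic the neurons perform.

def neuron(inputs, weights, threshold):
    # faultless low-level rule: exact weighted sum, exact comparison
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def answer_two_plus_two():
    x = [2, 2]  # the question, encoded as two inputs
    # hypothetical "learned" weights that happen to encode a wrong belief
    hidden = [neuron(x, [1.0, 1.0], 3.5),   # fires: 2*1.0 + 2*1.0 >= 3.5
              neuron(x, [0.5, 0.5], 1.5)]   # fires: 2*0.5 + 2*0.5 >= 1.5
    # the output layer maps the hidden firing pattern (1, 1) to the answer 5
    return 4 + neuron(hidden, [0.5, 0.5], 1.0)

print("2 + 2 =", answer_two_plus_two())   # prints: 2 + 2 = 5

Not a serious model, obviously, but it makes the same point: the hardware level 
obeys its rules without error while the software-level belief it supports is 
false.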


I came up with my own thought-experiment to show where Penrose's argument goes 
wrong, based on the same conclusion that Chalmers reached: a community of 
"realistic" AIs whose simulated brains work similarly to real human brains 
would never be able to be 100% certain that they had not reached a false 
conclusion about arithmetic, and the very act of stating confidently in 
mathematical terms that they would never reach a wrong conclusion would ensure that 
they were endorsing a false proposition about arithmetic. See my discussion 
with LauLuna on the "Penrose and algorithms" thread here: 
http://groups.google.com/group/everything-list/browse_thread/thread/c92723e0ef1a480c/429e70be57d2940b?#429e70be57d2940b


Jesse


On Thu, Aug 23, 2012 at 6:38 PM, Stephen P. King <stephe...@charter.net> wrote:

Dear Richard,

Your paper is very interesting. It reminds me a lot of Stephen Wolfram's 
cellular automaton theory. I only have one big problem with it. The 10d 
manifold would be a single fixed structure that, while conceivably capable of 
running the computations and/or implementing the Peano arithmetic, has a 
problem with the role of time in it. You might have a solution to this problem 
that I see that I did not deduce as I read your paper. How do you define time 
for your model?

-- 
Onward!

Stephen

"Nature, to be commanded, must be obeyed." 
~ Francis Bacon










-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
