Does the comp project use any synthetic logic? IMHO, synthetic logic is the basis of worldly intelligence.

Analytic logic can tell us nothing new, so it cannot on its own be a basis for intelligence.


http://instruct.westvalley.edu/lafave/HUME.HTM
"Analytic and Synthetic
Analytic statements are a special class of a priori statements. In analytic 
statements, the predicate concept adds nothing to the subject concept, e.g., 
“Bachelors are unmarried,” or “The red house is red.”
Synthetic statements are a special class of a posteriori statements. In 
synthetic statements, the predicate concept adds something to the subject 
concept (the two concepts are synthesized), e.g., “The red house is owned by a 
dentist.”
Hume’s Fork
According to Hume, legitimate reasoning has just two possible kinds of subject 
matter:
1. Relations of Ideas (e.g., math, logic), or
2. Matters of Fact (e.g., empirical matters).
Reasoning about relations of ideas is analytic and a priori.
Reasoning about matters of fact is synthetic and a posteriori.
For Hume, any legitimate statement is either analytic a priori or synthetic a 
posteriori. According to Hume, analytic a priori statements – the kind we use 
when we reason about relations of ideas – tell us nothing about the world; they 
tell us only about how we think and use language. 
Thus, according to Hume, the only statements that can tell us anything about
the world are synthetic a posteriori. And according to Hume, if a statement is 
synthetic a posteriori, it must be grounded in impressions (sense data or 
passion). If no impressions support a synthetic statement, the statement is 
bogus superstition, and should be rejected.
In other words, Hume’s fork has two tines. Legitimate statements are either 
analytic a priori — like statements of math, which tell us nothing about the 
external world; or 
synthetic a posteriori — like statements about the world of the senses, 
supportable by impressions (sense data or passions). 
Thus, statements are either analytic a priori (in which case they tell us 
nothing about the world), OR they are synthetic a posteriori (in which case 
they must be supported by impressions). For Hume, there are no other legitimate 
possibilities. 
Hume’s fork means that statements about matters of fact always require 
empirical support; we can never “just know” them. This is why Hume criticizes 
the Ontological Argument, which attempts to prove that the claim “God exists” 
is true a priori. For Hume, no claim about existence can be a priori, since 
whether or not something exists is a matter of fact, and thus must be known a 
posteriori.
Hume’s Fork does not necessarily plunge us into skepticism about morality, 
since for Hume, morality is a matter of the passions, and passions are one of 
the sources of impressions. So to say “Stealing is wrong” simply means “I feel 
stealing is wrong”; but what if everybody feels the same way? Then morality is 
a set of objective facts about human feeling based on common human nature. "
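
To put the quoted distinction in computational terms: an analytic claim can be checked from definitions alone, while a synthetic claim forces a lookup in the world. Here is a toy Python sketch of the difference (the dictionaries and predicate names below are my own invented illustration, not anything from Hume or the quoted page):

    # Hume's fork as a toy program. 'definitions' is all an analytic check
    # ever needs; a synthetic check must also consult 'world', standing in
    # for sense impressions. All names here are hypothetical.

    definitions = {"bachelor": {"unmarried", "man"}}   # relations of ideas
    world = {"red_house_owner": "dentist"}             # matters of fact

    def analytic_check():
        # "Bachelors are unmarried": the predicate is already contained
        # in the subject concept, so no empirical lookup is needed.
        return "unmarried" in definitions["bachelor"]

    def synthetic_check():
        # "The red house is owned by a dentist": true or false only
        # relative to observed fact, i.e. the 'world' table.
        return world.get("red_house_owner") == "dentist"

    print(analytic_check(), synthetic_check())  # True True, by different routes

The analytic check would return True in every possible 'world'; only the synthetic check tells us anything about this one -- which is the sense in which synthetic logic, not analytic logic, carries worldly intelligence.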





Roger Clough, rclo...@verizon.net
8/24/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
----- Original Message ----- 
From: Jesse Mazer 
To: everything-list 
Sent: 2012-08-23, 20:19:50
Subject: Re: A remark on Richard's paper


A quibble with the beginning of Richard's paper. On the first page it says: 


'It is beyond the scope of this paper and admittedly beyond my understanding to 
delve into Gödelian logic, which seems to be self-referential proof by 
contradiction, except to mention that Penrose in Shadows of the Mind (1994), as 
confirmed by David Chalmers (1995), arrived at a seemingly valid 7-step proof 
that human "reasoning powers cannot be captured by any formal system".'
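
(For reference, the self-referential construction Richard alludes to is Gödel's diagonal sentence. For a consistent, recursively axiomatized theory F extending basic arithmetic, one builds a sentence G_F such that

    G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)

i.e. G_F in effect says "I am not provable in F". If F is sound, G_F is true but unprovable in F; Penrose-style arguments try to turn our ability to "see" this into a proof that we are not any such F.)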


If you actually read Chalmers' paper at 
http://web.archive.org/web/20090204164739/http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html 
he definitely does *not* "confirm" Penrose's argument! He says in the paper 
that Penrose has two basic arguments for his conclusions about consciousness, 
and at the end of the section titled "the first argument" he concludes that the 
first one fails:


"2.16 It is section 3.3 that carries the burden of this strand of Penrose's 
argument, but unfortunately it seems to be one of the least convincing sections 
in the book. By his assumption that the relevant class of computational systems 
are all straightforward axiom-and-rules systems, Penrose is not taking AI 
seriously, and certainly is not doing enough to establish his conclusion that 
physics is uncomputable. I conclude that none of Penrose's arguments up to this 
point put a dent in the natural AI position: that our reasoning powers may be 
captured by a sound formal system F, where we cannot determine that F is sound."


Then, when dealing with Penrose's "second argument", he says that Penrose draws 
the wrong conclusion; where Penrose concludes that our reasoning cannot be the 
product of any formal system, Chalmers concludes that the actual issue is that 
we cannot be 100% sure our reasoning is "sound" (which I understand to mean we 
can never be 100% sure that the propositions we have concluded to be true or 
false actually have that truth-value in "true arithmetic"):


"3.12 We can see, then, that the assumption that we know we are sound leads to 
a contradiction. One might try to pin the blame on one of the other 
assumptions, but all these seem quite straightforward. Indeed, these include 
the sort of implicit assumptions that Penrose appeals to in his arguments all 
the time. Indeed, one could make the case that all of premises (1)-(4) are 
implicitly appealed to in Penrose's main argument. For the purposes of the 
argument against Penrose, it does not really matter which we blame for the 
contradiction, but I think it is fairly clear that it is the assumption that 
the system knows that it is sound that causes most of the damage. It is this 
assumption, then, that should be withdrawn. 


"3.13 Penrose has therefore pointed to a false culprit. When the contradiction 
is reached, he pins the blame on the assumption that our reasoning powers are 
captured by a formal system F. But the argument above shows that this 
assumption is inessential in reaching the contradiction: A similar 
contradiction, via a not dissimilar sort of argument, can be reached even in 
the absence of that assumption. It follows that the responsibility for the 
contradiction lies elsewhere than in the assumption of computability. It is the 
assumption about knowledge of soundness that should be withdrawn.


"3.14 Still, Penrose's argument has succeeded in clarifying some issues. In a 
sense, it shows where the deepest flaw in Gödelian arguments lies. One might 
have thought that the deepest flaw lay in the unjustified claim that one can 
see the soundness of certain formal systems that underlie our own reasoning. 
But in fact, if the above analysis is correct, the deepest flaw lies in the 
assumption that we know that we are sound. All Gödelian arguments appeal to this 
premise somewhere, but in fact the premise generates a contradiction. Perhaps 
we are sound, but we cannot know unassailably that we are sound."
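
A compressed, textbook-style way to see the shape of that contradiction (this is my gloss; Chalmers' own premises (1)-(4) in section 3 are more careful, and as he notes in 3.13 the capture assumption is not even needed): suppose our reasoning is captured by a consistent, recursively axiomatized formal system F extending arithmetic, and suppose F can establish its own soundness. Soundness implies consistency, so

    F \vdash \mathrm{Sound}(F) \ \Rightarrow\ F \vdash \mathrm{Con}(F),

while Gödel's second incompleteness theorem says F \nvdash \mathrm{Con}(F) for any such consistent F. Something in the package has to go, and Chalmers' diagnosis is that the knowledge-of-soundness assumption is the right thing to drop.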


So it seems Chalmers would have no problem with the "natural AI" position he 
discussed earlier, that our reasoning could be adequately captured by a 
computer simulation that did not come to its top-level conclusions about 
mathematics via a strict axiom/proof method involving the mathematical 
questions themselves, but rather by some underlying fallible structure like a 
neural network. The bottom-level behavior of the simulated neurons themselves 
would be deducible given the initial state of the system using the axiom/proof 
method, but that doesn't mean the system as a whole might not make errors in 
mathematical calculations; see Douglas Hofstadter's discussion of this issue 
starting on p. 571 of "Gödel, Escher, Bach", the section titled "Irrational and 
Rational Can Coexist on Different Levels", where he writes:


"Another way to gain perspective on this is to remember that a brain, too, is a 
collection of faultlessly functioning element-neurons. Whenever a neuron's 
threshold is surpassed by the sum of the incoming signals, BANG! -- it fires. It 
never happens that a neuron forgets its arithmetical knowledge -- carelessly 
adding its inputs and getting a wrong answer. Even when a neuron dies, it 
continues to function correctly, in the sense that its components continue to 
obey the laws of mathematics and physics. Yet as we all know, neurons are 
perfectly capable of supporting high-level behavior that is wrong, on its own 
level, in the most amazing ways. Figure 109 is meant to illustrate such a class 
of levels: an incorrect belief held in the software of a mind, supported by the 
hardware of a faultlessly functioning brain."


Figure 109 depicts the outline of a person's head with "2+2=5" appearing inside 
it, but the symbols in "2+2=5" are actually made up of large collections of 
smaller mathematical equations, like "7+7=14", which are all correct. A nice 
way of illustrating the idea, I think.
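
The same point can be made runnable. Here is a toy Python sketch (my own construction, not Hofstadter's): every unit below applies its threshold rule faultlessly, yet the hand-chosen weights encode a wrong belief, so the network as a whole answers 2+2=5.

    def unit(weights, inputs, threshold):
        # A McCulloch-Pitts-style unit: fires (returns 1) iff the weighted
        # sum of its inputs exceeds its threshold. This rule is never broken.
        return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

    def buggy_adder(a, b):
        # Ten output units with thresholds 0.5, 1.5, ..., 9.5; the network's
        # answer is read off as the number of units that fire. The constant
        # bias line (the trailing 1) is the deliberate mis-wiring: the
        # population as a whole "believes" the sum is a + b + 1.
        inputs = [a, b, 1]
        weights = [1, 1, 1]
        return sum(unit(weights, inputs, k + 0.5) for k in range(10))

    print(buggy_adder(2, 2))  # prints 5 -- wrong at the top level, though
                              # every unit obeyed its firing rule exactly

As in Figure 109, the error lives in the wiring (the "belief"), not in any unit's arithmetic.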


I came up with my own thought-experiment to show where Penrose's argument goes 
wrong, based on the same conclusion that Chalmers reached: a community of 
"realistic" AIs whose simulated brains work similarly to real human brains 
would never be able to be 100% certain that they had not reached a false 
conclusion about arithmetic, and the very act of confidently asserting, in 
mathematical language, that they would never reach a wrong conclusion would ensure that 
they were endorsing a false proposition about arithmetic. See my discussion 
with LauLuna on the "Penrose and algorithms" thread here: 
http://groups.google.com/group/everything-list/browse_thread/thread/c92723e0ef1a480c/429e70be57d2940b?#429e70be57d2940b


Jesse


On Thu, Aug 23, 2012 at 6:38 PM, Stephen P. King <stephe...@charter.net> wrote:

Dear Richard,

Your paper is very interesting. It reminds me a lot of Stephen Wolfram's 
cellular automaton theory. I only have one big problem with it. The 10d 
manifold would be a single fixed structure that, while conceivably capable of 
running the computations and/or implementing the Peano arithmetic, has a 
problem with the role of time in it. You might have a solution to this problem 
that I did not deduce as I read your paper. How do you define time 
for your model?

-- 
Onward!

Stephen

"Nature, to be commanded, must be obeyed." 
~ Francis Bacon




-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
