I don't completely understand the paper, but the underlying belief that the
problem of 'understanding' can be resolved with weighted or probabilistic
reasoning, by increasing the accuracy of successive approximations, seems a
little presumptuous to me. Reliance on methods based on this kind of premise
may be a workable foundation for some applied science at the technician
level, but it is not a secure foundation for research science or even for
engineering.

While I think their idea is interesting (as far as I understand what I have
read so far), I don't think you can do away with Godelian paradoxes just by
defining an algorithm that is designed to avoid them. Even granting that
their use of the Oracle approximator is a stand-in for unknown processes
that could be used to examine a sentence during the induction process, the
denial that the strings being produced by the algorithm have to constitute
a logical Language is a little weak. And if the examination of successive
approximations rests on a poor theoretical analysis, then paradoxes may be
insidiously inducted into the system.

When I was first told about paradoxes I am sure the concept went right over
my head. I believe I eventually discovered paradoxes in the general rules of
how I was supposed to behave, or in trying to find good ways to respond to
situations. Because being able to use a language to consider the nature of
paradoxes is so important, any foundational AGI mathematical system has to
be able to deal with paradoxes that can be analyzed even if they cannot be
valued as True or False. How could a Language (with a fancy L) be
considered a good theoretical underpinning for an AGI project if it cannot
hold paradoxes? The Godelian paradox seems to be a feature of any Language
that is strong enough to depend on evaluation across a finite or infinite
iteration via a theoretical representation. That means that the valuation
of a sentence may need to be based on additional information, which might
be represented using a virtual meta-language. And a typical logical
statement is one whose evaluation may require more than a single simple
step: a logical sentence may be partially evaluated, but evaluating the
entire sentence will usually take more than one step.
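A minimal sketch of the valuation issue in play here (my own illustration,
not the paper's actual construction, which works with coherent probability
assignments over sentences; the function names `liar` and `strong_liar` are
just hypothetical labels): the classical liar has no two-valued fixed
point, a real-valued semantics gives it one at 1/2, and the "strong"
probabilistic liar defeats even point-valued probabilities, which is
roughly why the paper retreats to open-interval truth values.

```python
# Classical liar: L asserts "L is false", i.e. v(L) = 1 - v(L).
def liar(v):
    return 1 - v

# No Boolean fixed point: neither 0 nor 1 satisfies v == 1 - v.
boolean_fixed_points = [v for v in (0, 1) if liar(v) == v]
assert boolean_fixed_points == []

# A real-valued (probabilistic) semantics does have a fixed point at 1/2.
assert liar(0.5) == 0.5

# The stronger liar "P(L) < 1/2" resists even point-valued probabilities:
# its value jumps from 1 to 0 at p = 0.5, so f(p) == p has no solution.
def strong_liar(p):
    return 1.0 if p < 0.5 else 0.0

assert all(strong_liar(p) != p for p in (0.0, 0.25, 0.5, 0.75, 1.0))
```

The discontinuity in the last case is the kind of thing that a single-point
truth value cannot absorb but an open interval around 1/2 can.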

I think Logical Positivism faded away because the languages of the
Positivists were not strong enough to analyze logical statements other than
those that had already been evaluated. I suspect that you might not
technically be able to use a truly Positivist Language to study logical
statements at all, because some logical statements can be paradoxical.

It is important to be able to work with apparent paradoxes in order to
discover how they might be resolved. While their Oracle would be able to
tell you that a resolution was more likely after taking successive
analytical steps, that does not always (or even usually) hold for practical
methods of study.

I am making what is probably my last effort to find an effective solution
to Boolean Satisfiability, not because I think that Logic would be
sufficient for an AGI language of thought, but because I think that a
polynomial-time solution would provide a great deal of leverage for a
program capable of 'thought' to work with.
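For what it's worth, the baseline that any polynomial-time SAT method has
to beat is the brute-force search over all 2^n assignments. A toy sketch of
that baseline (my own illustration, not my actual approach; `is_satisfiable`
is a hypothetical helper, with a CNF formula encoded as lists of signed
variable indices, positive for a true literal):

```python
from itertools import product

def is_satisfiable(num_vars, clauses):
    """Brute-force CNF satisfiability: try all 2^n truth assignments."""
    for bits in product([False, True], repeat=num_vars):
        # A clause is satisfied if any of its literals agrees with `bits`;
        # the formula is satisfied if every clause is.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
assert is_satisfiable(3, [[1, 2], [-1, 2], [-2, 3]])

# x1 and (not x1) -- unsatisfiable
assert not is_satisfiable(1, [[1], [-1]])
```

The exponential loop is exactly what a polynomial-time solution would have
to avoid.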


Jim Bromer


On Thu, Aug 21, 2014 at 1:02 PM, Ben Goertzel via AGI <[email protected]>
wrote:

>
> This time some of the MIRI (Singularity Institute) guys have discovered a
> fairly cool mathematical nugget, that relates a bit to OpenCog...
>
> http://intelligence.org/files/DefinabilityTruthDraft.pdf
>
> What they show here is basically that some paradoxes of "reflection" in
> logical systems go away if one considers only statements with (open)
> interval truth values, rather than single-point truth values...
>
> This has direct implications for PLN, which uses imprecise (i.e. interval)
> truth values....  It shows that, in a sense, PLN can be reflective without
> spawning nasty Godel paradoxes....  It shows that one can define "truth
> within PLN" within PLN, without running into Godelian/Tarskian
> limitations....  Kinda cool...
>
> -- Ben
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "In an insane world, the sane man must appear to be insane". -- Capt.
> James T. Kirk
>
> "Emancipate yourself from mental slavery / None but ourselves can free our
> minds" -- Robert Nesta Marley
>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/24379807-f5817f28> |
> Modify
> <https://www.listbox.com/member/?&;>
> Your Subscription <http://www.listbox.com>
>


