To me, this kind of result means nothing. I expect any sentient being to
operate with simultaneous contradictory theories; pardon my French, but it
may want to watch the movie at the same time it wants to pee :) ("want"
and "need" are roughly synonymous here, meaning there is one whole theory
for why you should keep watching and another whole theory for why you
should be peeing at some other location).

Surely, something clever will have to happen to reconcile those zillions of
disjoint theories/agents, but it would be asking too much to avoid the hard
work via one proof or another. I am also afraid that a mathematical
resolution would stifle creativity, i.e., you would be prevented from
peeing in a bottle or whatever. Obviously mathematicians will keep chasing
that pink elephant :)

AT


On Tue, Aug 26, 2014 at 4:55 PM, Ben Goertzel via AGI <[email protected]>
wrote:

>
> To me, their result is useful in that it tells me that OpenCog's PLN
> logical inference component is unlikely to encounter a Godel-type paradox
> when reflecting on itself, due to its use of imprecise truth values to
> quantify the strength of logical relationships...
>
>
> On Tue, Aug 26, 2014 at 8:40 PM, Jim Bromer via AGI <[email protected]>
> wrote:
>
>> It's not really a sleight of hand...
>>
>> I mean, if you can say "This sentence is false" has a truth value of 0.5,
>> without having to assign it a value of 0 or 1, then you have a lot more
>> flexibility in avoiding paradox....  What they are doing is a fancy version
>> of that, which works more generally...
>>
>> ben
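
Ben's point can be made concrete: the liar sentence asserts its own
falsehood, so any consistent graded truth value p must satisfy p = 1 - p,
whose unique solution is 0.5. A minimal sketch (the function and setup are
this editor's illustration, not anything from the paper under discussion):

```python
def liar(value):
    # "This sentence is false": the sentence's truth value equals the
    # degree to which its own assigned value is false, i.e. 1 - value.
    return 1 - value

# Classical 0/1 assignments flip and never settle:
liar(0.0)  # -> 1.0
liar(1.0)  # -> 0.0

# The graded value 0.5 is the unique fixed point:
liar(0.5)  # -> 0.5
```

Restricting values to {0, 1} leaves no fixed point at all, which is the
paradox; admitting intermediate values dissolves it.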
>>
>>
>> But it is not an effective way to avoid paradox. (And I know that you
>> already know that).
>>
>> I always wonder whether the ideas in these papers have any practical
>> use. For instance, some problems, like appropriate engineering problems,
>> do have effective ways to increase the accuracy of an approximation
>> given the result of some test. But there is still a problem. If the
>> empirical method is applied incorrectly (or there is a source of
>> variation that has to be compensated for), then successive 'refinements'
>> of the test may not produce more accurate results. And that makes me
>> think: just because successive tests narrow in on a particular reading
>> does not mean the result is necessarily more accurate, because the
>> variation in the problem may need to be adjusted for some unusual
>> feature.
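
Jim's precision-vs-accuracy point can be shown with a toy simulation (the
bias and noise figures here are made up purely for illustration): a
miscalibrated instrument's readings tighten around a value as samples
accumulate, yet they converge to the wrong value.

```python
import random

def estimate(true_value=10.0, bias=0.7, noise=0.05, n=1000):
    # A miscalibrated instrument: every reading carries a fixed
    # systematic bias plus a small amount of random noise.
    rng = random.Random(0)  # seeded for reproducibility
    readings = [true_value + bias + rng.gauss(0, noise) for _ in range(n)]
    return sum(readings) / n

# More samples shrink the spread (better precision), but the mean
# converges to true_value + bias, not true_value: no better accuracy.
```

Averaging more readings refines the estimate toward 10.7, not 10.0, which
is exactly the failure mode Jim describes: the refinement process narrows
in without becoming more accurate.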
>>
>> So the practical value of their method seems to be limited to problems
>> that are both appropriate and have well-defined test methods, ones that
>> can give more precise results through some kind of refining process.
>>
>> But their idea might be useful for recognizing when a refinement
>> process stops producing more precise results once a certain point is
>> reached. By trying various ways of adjusting the testing process, the
>> system might then be able to find adjustments that do improve the
>> results.
>>
>> Jim Bromer
>>
>>
>> On Mon, Aug 25, 2014 at 12:17 PM, Ben Goertzel via AGI <[email protected]>
>> wrote:
>>
>>>
>>> It's not really a sleight of hand...
>>>
>>> I mean, if you can say "This sentence is false" has a truth value of
>>> 0.5, without having to assign it a value of 0 or 1, then you have a lot
>>> more flexibility in avoiding paradox....  What they are doing is a fancy
>>> version of that, which works more generally...
>>>
>>> ben
>>>
>>>
>>> On Tue, Aug 26, 2014 at 12:00 AM, Mike Archbold <[email protected]>
>>> wrote:
>>>
>>>> I took a stab at the paper and it seemed like they were trying to get
>>>> outside the system with a sleight of hand involving probabilities.  It
>>>> seems like they are writing for a very small in-group.  Ben:  I think
>>>> your writing is clear.  I've been working through your book.  People
>>>> should write high-fallutin' metamath papers more like that.
>>>>
>>>> On 8/25/14, Ben Goertzel via AGI <[email protected]> wrote:
>>>> > ***
>>>> >
>>>> > So the system in the paper by the MIRI guys seems to be based on a
>>>> logical
>>>> > language of analysis that would rule out certain kinds of sentences
>>>> if they
>>>> > tended toward not being logically evaluable.
>>>> > ***
>>>> >
>>>> > No, not really; you seem to not understand their theorem  ;p
>>>> >
>>>> >
>>>> >
>>>
>>>
>>>
>>> --
>>> Ben Goertzel, PhD
>>> http://goertzel.org
>>>
>>> "In an insane world, the sane man must appear to be insane". -- Capt.
>>> James T. Kirk
>>>
>>> "Emancipate yourself from mental slavery / None but ourselves can free
>>> our minds" -- Robert Nesta Marley
>>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "In an insane world, the sane man must appear to be insane". -- Capt.
> James T. Kirk
>
> "Emancipate yourself from mental slavery / None but ourselves can free our
> minds" -- Robert Nesta Marley
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
