On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> The paradox seems trivial, of course. I generally agree with your
> analysis (describing how we consider the sentence, take into account
> its context, and so on). But the big surprise to logicians was that the
> paradox is not just a linguistic curiosity; it is an essential feature of
> any logic satisfying some broad, seemingly reasonable requirements.
>
> A logical "sentence" corresponds better to a concept/idea, so bringing
> in the linguistic context and so on does not help much in the logic-based
> version (although I readily admit that it resolves the paradox in the
> linguistic form in which I presented it in my previous email). The question
> becomes: does the system allow "This thought is false" to be thought,
> and if so, how does it deal with it? Intuitively, it seems that we
> cannot think such a silly concept.

> you said "I don't think the problem of self-reference is
> significantly more difficult than the problem of general reference",
> so I will say "I don't think the frame problem is significantly more
> difficult than the problem of general inference." And like I said, for
> the moment I want to ignore computational resources...

Ok, but what are you getting at?  I don't want to stop you from going
on and explaining it, but I want to tell you about another criticism I
developed from talking to people who asserted that everything could be
logically reduced (and in particular that anything an AI program could
do could be logically reduced).  I finally realized that what they were
saying came down to something along the lines of "If I could understand
everything then I could understand everything."  I mentioned that to
the guys I was talking to, but I don't think they really got it.  Or at
least they didn't like it.  I think you might find yourself heading
down the same path if you don't keep your eyes open.  But I really do
want to know where it is you are going.

I just read the message that you referred to in the OpenCog Prime
wikibook and... I didn't understand it completely, but I still don't
see what the problem is.  You should realize that you cannot expect to
use inductive processes to create a single logical theory about
everything that can be understood.  I once discussed this with Pei,
and he agreed that the representational system that contains the
references to ideas can be logical even though the ideas referred to
may not be.  So a debugged referential program does not mean that the
system of ideas the references point to has to be perfectly sound.  We
can still consider paradoxes and the like.
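
Just to make sure we mean the same thing by that distinction, here is a
minimal sketch in Python (the names Idea and ReferenceStore are
hypothetical, not taken from any actual system): the program that
stores references to ideas is perfectly well-behaved code even when one
of the referenced ideas is the liar sentence, because it only records
the reference, it never asserts it.

    class Idea:
        """A referenced idea; its truth value may be True, False, or None
        (undetermined/paradoxical). The container never asserts it."""
        def __init__(self, text, truth=None):
            self.text = text
            self.truth = truth

    class ReferenceStore:
        """The representational system: it only records and retrieves
        references, so its own behavior is well-defined no matter what
        the referenced ideas say."""
        def __init__(self):
            self.ideas = []

        def add(self, idea):
            self.ideas.append(idea)   # storing a reference, not endorsing a claim
            return idea

        def undetermined(self):
            return [i.text for i in self.ideas if i.truth is None]

    store = ReferenceStore()
    store.add(Idea("Snow is white", truth=True))
    store.add(Idea("This thought is false"))   # paradoxical referent, stored safely
    print(store.undetermined())                # ['This thought is false']

The store stays debugged and consistent; only the referenced idea is
problematic.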

Your argument sounds as if you are saying that a working AI system,
because it would be perfectly logical, would imply that Goedel's
Theorem and the Halting Problem weren't problems.  But I have already
expressed my point of view on this: I don't think that the ideas an AI
program can create are going to be integrated into a perfectly logical
system.  We can use logical sentences to input ideas very effectively,
as you pointed out.  But that does not mean that those logical
sentences have to be integrated into a single sound logical system.
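
For what it is worth, here is the standard diagonalization that makes
the Halting Problem a problem, sketched in Python with a hypothetical
halts() oracle.  The contradiction only rules out the assumed oracle
itself; it says nothing against a program that merely represents and
reasons about halting, which is the distinction I am trying to draw.

    def halts(program, arg):
        # Hypothetical oracle, assumed (for contradiction) to decide
        # whether program(arg) halts. No such total, correct function exists.
        raise NotImplementedError("assumed oracle; cannot actually be implemented")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about program(program).
        if halts(program, program):
            while True:        # loop forever if the oracle says it halts
                pass
        return "halted"        # halt if the oracle says it loops

    # If halts() were real and correct, then halts(diagonal, diagonal) could be
    # neither True nor False -- the contradiction behind the undecidability result.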

Where are you going with this?
Jim Bromer

