> I don't think the problems of a self-referential paradox are
> significantly more difficult than the problems of general reference.
> Not only are there implicit boundaries, some of which have to be
> changed in an instant as the conversation develops, there are also
> multiple levels of generalization in conversation.  These multiple
> levels of generalization are not simple or even reliably constructive
> (reinforcing).  They are complex and typically contradictory.  In my
> opinion they can be understood because we are somehow able to access
> different kinds of relevant information necessary to decode them.

The paradox seems trivial, of course, and I generally agree with your
analysis (describing how we consider the sentence, take its context
into account, and so on). But the big surprise to logicians was that
the paradox is not just a linguistic curiosity; it is an essential
feature of any logic satisfying some broad, seemingly reasonable
requirements.

A logical "sentence" corresponds better to a concept/idea, so bringing
in the lingual context and so on does not help much in the logic-based
version (although I readily admit that it solves the paradox in the
lingual form I presented it in my previous email). The question
becomes, does the system allow "This thought is false" to be thought,
and if so, how does it deal with it? Intuitively it seems that we
cannot think such a silly concept. (Oh, and don't let the quotes
around it tempt you into merely rehearsing the sentence... I can say
"This thought is false" in my head, but can I actually think a thought
that asserts its own falsehood? Not so sure...)
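
For what it's worth, one standard way a formal system can be allowed
to "think" the sentence without exploding is Kripke's fixed-point
construction over three-valued (strong Kleene) semantics. The toy
program below is purely my own illustration of that idea; every name
in it is invented:

    # Toy model: sentences start UNDEFINED and only become TRUE/FALSE
    # when the facts they depend on settle.  The liar never settles.
    TRUE, FALSE, UNDEF = "true", "false", "undefined"

    def neg(v):
        # Strong Kleene negation: undefined stays undefined.
        return {TRUE: FALSE, FALSE: TRUE, UNDEF: UNDEF}[v]

    # Each sentence maps the current valuation to a truth value.
    sentences = {
        "snow": lambda val: TRUE,              # ordinary grounded sentence
        "liar": lambda val: neg(val["liar"]),  # "this sentence is false"
    }

    def fixed_point(sentences, max_rounds=100):
        # Iterate the valuation until nothing changes.
        val = {name: UNDEF for name in sentences}
        for _ in range(max_rounds):
            new = {name: f(val) for name, f in sentences.items()}
            if new == val:
                return val
            val = new
        return val

    print(fixed_point(sentences))
    # -> {'snow': 'true', 'liar': 'undefined'}

On this picture the liar is representable but never grounds out, which
matches the intuition: I can say the words in my head, but the system
never settles into actually asserting the thought or its negation.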

> This is one reason why I think that the Relevancy Problem of the Frame
> Problem is the primary problem of contemporary AI.  We need to be able
> to access relevant information even though the appropriate information
> may change dramatically in response to the most minor variations in
> the comprehension of a sentence or of a situation.

Well, you said "I don't think the problem of self-reference is
significantly more difficult than the problem of general reference",
so I will say "I don't think the frame problem is significantly more
difficult than the problem of general inference." And like I said, for
the moment I want to ignore computational resources...

On Fri, Aug 15, 2008 at 2:21 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Our ability to think about abstractions and extrapolations off of
> abstractions comes because we are able to create game boundaries
> around the systems that we think about.  So yes you can talk about
> infinite resources and compare it to the domain of the lambda
> calculus, but this kind of thinking is possible only because we are
> able to abstract ideas by creating rules and barriers for the games.
> People don't always think of these as games because they can be so
> effective at producing material change that they seem, and can be, as
> practical as a truck, or as armies of trucks.
>
>> It is possible that your logic, fleshed out, could circumnavigate the
>> issue. Perhaps you can provide some intuition about how such a logic
>> should deal with the following line of argument (most will have seen
>> it, but I repeat it for concreteness):
>>
>> "Consider the sentence "This sentence is false". It is either true or
>> false. If it is true, then it is false. If it is false, then it is
>> true. In either case, it is both true and false. Therefore, it is both
>> true and false."
>
> Why?  I mean that my imagined program is a little like a method actor
> (like Marlon Brando).  What is its motivation?  Is it a children's
> game?  A little like listening to ghost stories? Or watching movies
> about the undead?
>
> The sentence, 'this sentence is false,' obviously relates to a
> boundary around the sentence. However, that insight wasn't obvious to
> me every time I came across the sentence.  Why not?  I don't know, but
> I think that when statements like that are unfamiliar, you put them
> into their own abstracted place and wait to see how they are going
> to be used relative to other information.
>
> Let's go with your statement and suppose that the argument is
> unfamiliar.  Basically, the first step would be to interpret the
> elementary partial meanings of the sentences without necessarily
> integrating them.  Each sentence is put into a temporary boundary.
> 'It is either true or false.'  Ok got it, but since this kind of
> argument is unfamiliar to my imaginary program, it does not
> immediately realize that the second sentence is referring to the
> first.  Why not?  Because the first sentence creates its own aura of
> reference: if the intended self-reference is appreciated, then the
> sense that the second sentence is going to refer back to the first
> will, in some cases, be made less likely.  In other cases, the
> awareness that the first sentence is self-referential might make
> it more likely that the next sentence will also be interpreted as
> referring to it.
>
> The practical problems of understanding the elementary relations of
> communication are so complicated that the problem of dealing with a
> paradox is not as severe as you might think.
>
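
(Interjecting in the quote here: the two-pass process described above,
bracket each sentence in its own temporary boundary first and
integrate references later, is concrete enough to sketch. The toy
below is entirely my reading of that description, not Jim's program;
all the names are invented.)

    from dataclasses import dataclass, field

    @dataclass
    class Boundary:
        # A temporary boundary holding one sentence's partial meaning.
        text: str
        refers_to: list = field(default_factory=list)  # filled in later

    def bracket(sentences):
        # Pass 1: interpret each sentence in isolation, no integration.
        return [Boundary(text=s) for s in sentences]

    def integrate(boundaries):
        # Pass 2: tentatively link references between boundaries.  A
        # sentence opening with "it" is guessed to point at the previous
        # boundary; nothing is forced, so an unfamiliar pattern (such as
        # self-reference) simply stays unintegrated until more arrives.
        for i, b in enumerate(boundaries):
            if i > 0 and b.text.lower().startswith("it "):
                b.refers_to.append(i - 1)  # revisable guess
        return boundaries

    argument = [
        'Consider the sentence "This sentence is false".',
        "It is either true or false.",
        "If it is true, then it is false.",
    ]
    for i, b in enumerate(integrate(bracket(argument))):
        print(i, b.refers_to, b.text)

Nothing in the sketch handles the liar specially; the point is only
that nothing forces an immediate global integration, which seems to be
where the "not as severe as you might think" claim comes from.
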
> We are able to abstract and use those abstractions in processes that
> can be likened to extrapolation because we have to be able to do that.
>
> I don't think the problems of a self-referential paradox are
> significantly more difficult than the problems of general reference.
> Not only are there implicit boundaries, some of which have to be
> changed in an instant as the conversation develops, there are also
> multiple levels of generalization in conversation.  These multiple
> levels of generalization are not simple or even reliably constructive
> (reinforcing).  They are complex and typically contradictory.  In my
> opinion they can be understood because we are somehow able to access
> different kinds of relevant information necessary to decode them.
>
> This is one reason why I think that the Relevancy Problem of the Frame
> Problem is the primary problem of contemporary AI.  We need to be able
> to access relevant information even though the appropriate information
> may change dramatically in response to the most minor variations in
> the comprehension of a sentence or of a situation.
>
> I didn't write much about the self-referential paradox because I think
> it is somewhat trivial. Although an AI program will be 'logical' in
> the sense of the logic of computing machinery, that does not mean that
> a computer program has to be strictly logical.  This means that
> thinking can contain errors, but that is not front page news.  Man
> bites dog!  Now that's news.
>
> Jim Bromer
>

