On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> The paradox seems trivial, of course. I generally agree with your
>> analysis (describing how we consider the sentence, take into account
>> its context, and so on). But the big surprise to logicians was that
>> the paradox is not just a linguistic curiosity; it is an essential
>> feature of any logic satisfying some broad, seemingly reasonable
>> requirements.
>>
>> A logical "sentence" corresponds better to a concept/idea, so bringing
>> in the linguistic context and so on does not help much in the
>> logic-based version (although I readily admit that it solves the
>> paradox in the linguistic form in which I presented it in my previous
>> email). The question
>> becomes, does the system allow "This thought is false" to be thought,
>> and if so, how does it deal with it? Intuitively it seems that we
>> cannot think such a silly concept.
>
>> you said "I don't think the problem of self-reference is
>> significantly more difficult than the problem of general reference",
>> so I will say "I don't think the frame problem is significantly more
>> difficult than the problem of general inference." And like I said, for
>> the moment I want to ignore computational resources...
>
> Ok but what are you getting at?

I had a friend who would win arguments in high school by saying
"what's your point?" after a long back-and-forth, shifting the burden
onto me to show that what I was arguing was not only true but
important... which it often wasn't. :)

Part of the point is to answer the question "What do we mean when we
refer to mathematical entities?" Part of the point is to find the
correct logic, rejecting the notion that logics are simply different,
not better or worse*. Part of the point is that
I am worried-- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.
Several examples-- Artificial neural networks in their currently most
popular form are limited to models that a logician might call
"0th-order" or "propositional", not even first-order, yet they are
powerful enough to solve many problems. It is thus easy to think that
the problem is just computational power. The currently popular AIXI
model could (if it were built) learn to skillfully manipulate all
sorts of mathematical formalisms, and speak in a convincing manner to
human mathematicians. Yet, it is easy to see from AIXI's definition
that it will not actually apply any math it learns to model the world,
since it has a hardwired assumption that the universe is computable.
(I don't know if you're familiar with AIXI, just ignore
this example if not...)
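
(For concreteness, a rough paraphrase of the AIXI action rule in
Hutter's notation-- a sketch from memory, not a quotation, with m the
planning horizon:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_k + \cdots + r_m)
          \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum ranges only over programs q for a fixed universal Turing
machine U, so every environment hypothesis AIXI can ever entertain is,
by construction, a computable one. That is where the hardwired
assumption lives.)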

*(I am being a bit extreme here. One logic can be the right logic for
one purpose, while another is the right logic for a different purpose.
Two logics can turn out to be equivalent, and therefore about equally
good for any purpose. But I am saying that there should be some set of logics,
all inter-equivalent, that are the right logic for the "broadest
possible" purpose-- that is, reasoning.)

>  I don't want to stop you from going
> on and explaining what it is that you are getting at, but I want to
> tell you about another criticism I developed from talking to people
> who asserted that everything could be logically reduced (and in
> particular anything an AI program could do could be logically
> reduced.)  I finally realized that what they were saying could be
> reduced to something along the lines of "If I could understand
> everything then I could understand everything."

EXACTLY!

Or, um, rather, yes. That is what I am getting at. If I could
understand everything then I would understand everything. It is an odd
way of putting it, but true.

> I mentioned that to
> the guys I was talking to but I don't think that they really got it.
> Or at least they didn't like it. I think you might find yourself on
> the same lane if you don't keep your eyes open. But I really want to
> know where it is you are going.

It seems to me that these people must have been arguing with you
because they saw certain points you were making as essentially
illogical, and got caught up trying to explain something that was
utterly obvious to them but which they thought you were denying. So
you came back to them and said that their point was utterly obvious,
which was true.

>
> I just read the message that you referred to in the OpenCog Prime
> wikibook and... I really didn't understand it completely, but I still
> don't understand what the problem is.

I am somewhat confused. I do not remember referring to the wikibook,
and didn't find the reference with a brief sweep of the emails I've
sent on this thread.

> You should realize that you cannot
> expect to use inductive processes to create a single logical theory
> about everything that can be understood. I once discussed things with
> Pei and he agreed that the representational system that contains the
> references to ideas can be logical even though the references may not
> be. So a debugged referential program does not mean that the system
> the references refer to has to be perfectly sound. We can
> consider paradoxes and the like.

I think what you are saying is related to the idea that an AGI would
not have direct access to its base-level logic. So, for example, just
because it has a fast adder on its chip does not mean that it is good
at addition problems. But, there needs to be some level at which the
system is capable of accessing its own "mental stuff". So, what I am
asserting is that at this level the program's behavior should follow
some logic. And, in the absence of resource restrictions, I would like
to know what sort of logic *should* be there.
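
(A toy sketch of the distinction, with names invented purely for
illustration-- the substrate's adder is opaque to the system, while
the accessible "mental" level manipulates numerals explicitly:

    # Hypothetical two-level sketch; substrate_add and mental_add are
    # made-up names, not part of any existing system.

    def substrate_add(a, b):
        return a + b  # fast "hardware" addition, opaque to the system

    def mental_add(x, y):
        # Explicit column-by-column addition over decimal strings: the
        # level the system can inspect, and the level whose behavior
        # should follow some logic.
        x, y = x.rjust(len(y), '0'), y.rjust(len(x), '0')
        digits, carry = [], 0
        for dx, dy in zip(reversed(x), reversed(y)):
            total = int(dx) + int(dy) + carry
            digits.append(str(total % 10))
            carry = total // 10
        if carry:
            digits.append(str(carry))
        return ''.join(reversed(digits))

    print(mental_add("278", "45"))  # prints "323"

The logic I am asking about would govern the second level, not the
first.)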

>
> Your argument sounds as if you are saying that a working AI system,
> because it would be perfectly logical, would imply that Goedel's
> Theorem and the Halting Problem weren't problems.

That is not possible, since there is no way to solve the halting
problem... the ideal logic would only solve the halting problem in
cases that humans could solve. Currently our best logics can solve a
certain subset of cases, but humans seem to have the ability to come
up with additional true axioms. (There is a camp that claims that
humans can solve the halting problem totally, but I don't find this
especially plausible.)
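
(For anyone following along, the standard diagonalization sketch of
why no total solution can exist-- halts() here is the assumed decider,
not real code:

    # Suppose halts(program_source, input_data) were a total, always-
    # correct decider for the halting problem. Then this program is
    # contradictory:

    def contrary(program_source):
        if halts(program_source, program_source):  # assumed decider
            while True:  # ...then loop forever
                pass
        else:
            return  # ...otherwise halt immediately

    # Feeding contrary its own source makes it halt if and only if it
    # does not halt, so no such total halts() can exist.

The interesting question is which cases a real reasoner can settle,
not whether all of them can be.)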

As for Goedel's theorem, the important question would similarly be to
what extent humans can escape it (again, it isn't especially plausible
that we can actually get around it... but I feel that Goedel's
original proof should not apply to us).
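
(For reference, the standard statement: for any consistent,
effectively axiomatized theory T strong enough to encode arithmetic,
the diagonal lemma yields a sentence G_T with

    T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner)

and T cannot prove G_T (nor, under slightly stronger assumptions,
refute it). For the proof to apply to us, a human reasoner would have
to be relevantly like such a fixed, effectively axiomatized T.)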

> But I have already
> expressed my point of view on this, I don't think that the ideas that
> an AI program can create are going to be integrated into a perfectly
> logical system.  We can use logical sentences to input ideas very
> effectively as you pointed out. But that does not mean that those
> logical sentences have to be integrated into a single sound logical
> system.

But these individual ideas need to be integrated by *some* system, be
it "sound" or not. My question is what the nature of that system is--
in particular, how does it deal with [insert various historical
problems in mathematical logic]. The answer may well be "those
problems simply don't apply, because this system looks totally
different than the specific logical settings those problems were
discovered in". But if that is the case, I want to know it, and I want
to know why.

>
> Where are you going with this?

Well, to answer that, just go back to my original post-- I mean, all
I'm doing is arguing that these questions are important. I don't claim
to have the answers!

More broadly speaking, my concern is what I mentioned earlier-- making
sure that AGI doesn't run up against the "almost-everything" barrier.
Simplistic neural networks can do almost everything a narrow-AI
researcher might want (given enough data...). First-order logic can do
almost everything a mathematician might want (given enough extra
axioms...). AIXI can theoretically learn anything (given that that
thing is computable...).

Hope this clarifies.

> Jim Bromer

