Charles,

I find this perspective interesting. Given what logicians know so far,
it is more plausible that there is not one right logic, but merely a
hierarchy of better/worse/different logics. My search for the "top" is
somewhat unjustified (but I cannot help thinking that there must be
a top). Nonetheless, the image of evolution randomly
experimenting in the space of possible logics, and simply finding very
powerful logics rather than this "top" of mine (even if it exists), is
quite reasonable.

But, I cannot help saying it... if this is the right perspective,
then evolution itself could be seen as the "top", the correct logic. I
am not sure what this view implies.

--Abram

On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> Abram Demski wrote:
>>
>> On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]>
>>> wrote:
>>>
>>>>
>>>> The ... the moment I want to ignore computational resources...
>>>>
>>>
>>> Ok but what are you getting at?
>>>
>>
>> I had a friend who would win arguments in high school by saying
>> "what's your point?" after a long back-and-forth, shifting the burden
>> on me to show that what I was arguing was not only true but
>> important... which it often wasn't. :)
>>
>> Part of the point is to answer the question "What do we mean when we
>> refer to mathematical entities?". Part of the point is to find the
>> correct logic, rejecting the notion that logics
>> are simply different, not better or worse*. Part of the point is that
>> I am worried-- worried that an AGI system based on anything less than
>> the one most powerful logic will be able to fool AGI researchers for a
>> long time into thinking that it is capable of general intelligence.
>> Several examples-- Artificial neural networks in their currently most
>> popular form are limited to models that a logician might call
>> "0th-order" or "propositional", not even first-order, yet they are
>> powerful enough to solve many problems. It is thus easy to think that...
>>
>
> FWIW, I doubt that any AGI is actually possible.  I'm reasonably certain
> that it's possible to get closer than people are, but we aren't really even
> an attempt at a fully general AI.  I have a strong suspicion that things
> analogous to the halting problem and Gödel's incompleteness theorem are
> lurking.
>
> As such, I don't think it's reasonable to worry about implementing the "most
> powerful logic".  Anything that gets implemented will be incomplete (or
> self-contradictory).  People seem to have evolved to go with
> self-contradictory.
>
> As such, my "solution" is like the standard fix for global maximization
> by hill-climbing... the best approach is to start in lots of different places
> that each find their own local optimum.  You still won't find the global
> optimum except by chance, but you can get a lot closer.  I don't like
> thinking of this as relaxation or annealing, but I'm not sure why.  Possibly
> because they usually use smaller chunks than I think best.  I don't think
> the surface is sufficiently homogeneous to use the same approach in every
> locale, except on a very large scale.  (And by writing this I'm probably
> revealing my ignorance [profound] of the techniques.)
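
The multi-start strategy Charles describes is essentially random-restart
hill climbing. A minimal sketch (the objective function, step size, and
restart count here are all illustrative, not anything from the thread):

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: accept a random neighbor only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def random_restart(f, restarts=20, lo=-10.0, hi=10.0):
    """Climb from many random starting points; keep the best local optimum.

    Each restart finds only its own local optimum; more restarts raise the
    chance that one of them lands in the global optimum's basin.
    """
    best = None
    for _ in range(restarts):
        x = hill_climb(f, random.uniform(lo, hi))
        if best is None or f(x) > f(best):
            best = x
    return best

# A bumpy objective: many local maxima, global maximum near x = 0.
f = lambda x: math.cos(3 * x) - 0.1 * x * x
print(random_restart(f))
```

As Charles notes, this still finds the global optimum only by chance, but
with enough well-spread starting points the best local optimum found is
usually far better than any single run would give.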
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

