The paradox (I assume that is what you were pointing to) rests on
your idealized presentation.  Not only was the presentation
idealized, it was also exaggerated.

I sometimes wonder why idealizations can be so effective in some
cases.  An idealization is, after all, an imperfect way of thinking
about the world.  I think logical idealizations are effective because,
as they are refined by knowledge of reality, they can illuminate the
relations that matter by clearing non-central relations out of the way.

And idealizations can lead quickly toward feasible tests if they are
refined toward feasibility based on relevant experience of the world.
It is a little like making some simplistic, outrageous claim.  No
matter how absurd the claim is, if you are willing to examine it
against applicable cases, you can learn from it.  And if you are
willing to make ad hoc (or is it post hoc?) refinements to the claim,
there is a greater chance that it will lead toward serendipity.

If an AI program made some claim which it 'thought' it could evaluate,
then its failure to evaluate that claim could lead it to look for some
other data which it could evaluate.  For example, if it recognized
that it could not apply a claim to anything in the IO data
environment, it could subsequently try to do the same kind of thing
with some more obvious situation that it is able to reliably detect in
the IO environment.  If its programming leads it toward
generalization, then it can create systems to detect what it considers
to be kinds of data events.
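As a rough illustration of that fallback loop, here is a minimal
Python sketch.  Everything in it (Claim, evaluate_with_fallback, the
toy detectors, the sample IO data) is hypothetical and only meant to
show the shape of the idea, not any particular implementation:

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Claim:
        description: str
        # Returns True/False if the claim can be checked against the IO
        # data, or None if the claim cannot be applied to the data at all.
        test: Callable[[List[float]], Optional[bool]]

    def evaluate_with_fallback(claim: Claim,
                               fallbacks: List[Claim],
                               io_data: List[float]) -> Claim:
        # Try the ambitious claim first; if it cannot be applied, retreat
        # to progressively more obvious claims that can be reliably
        # detected in the IO data.
        for candidate in [claim] + fallbacks:
            result = candidate.test(io_data)
            if result is not None:
                print(f"Evaluated '{candidate.description}': {result}")
                return candidate
            print(f"Could not apply '{candidate.description}'; falling back")
        raise ValueError("No candidate claim could be applied to the IO data")

    # Hypothetical usage: the ambitious claim cannot be applied, so the
    # program falls back to a simpler, reliably detectable data event.
    ambitious = Claim("inputs encode a grammar", lambda data: None)
    obvious = Claim("some input exceeds 0.9",
                    lambda data: any(x > 0.9 for x in data))

    evaluate_with_fallback(ambitious, [obvious], [0.2, 0.95, 0.4])

The point of the sketch is only that an unusable claim does not have to
be a dead end; it can redirect the program toward events it can
actually detect.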

Jim Bromer

On Thu, Aug 14, 2008 at 4:26 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> Jim,
>> You are right to call me on that. I need to provide an argument that,
>> if no logic satisfying B exists, human-level AGI is impossible.
>
> I don't know why I am being so aggressive these days.  I don't start
> out intending to be in everyone's face.
>
> If a logical system is incapable of representing the context of a
> problem, then it cannot be said that it (the logical system) implies
> that the problem cannot be solved in the system.  You are only able
> to come to a conclusion like that because you can transcend the
> supposed logical boundaries of the system.  I am really disappointed
> that you did not understand what I was saying, because your unusual
> social skills make you seem unusually capable of understanding what
> other people are saying.
>
> However, your idea is interesting so I am glad that you helped clarify
> it.  I have additional comments below.
>>
>> B1: A foundational logic for a human-level intelligence should be
>> capable of expressing any concept that a human can meaningfully
>> express.
>>
>> If a broad enough interpretation of the word "logic" is taken, this
>> statement is obvious; it could amount to simply "A human-level
>> intelligence should be capable of expressing anything it can
>> meaningfully express". (i.e., logic = way of operating.)
>>
>> The key idea for me is that logic is not the way we *do* think, it is
>> the way we *should* think, in the ideal situation of infinite
>> computational resources. So, a more refined B would state:
>>
>> B2: The theoretical ideal of how a human-level intelligence should
>> think, should capture everything worth capturing about the way humans
>> actually do think.
>>
>> "Everything worth capturing" means everything that could lead to good 
>> results.
>>
>> So, I argue, if no logic exists satisfying B2, then human-level
>> artificial intelligence is not possible. In fact, I think the negation
>> of B2 is nonsensical:
>>
>> not-B2: There is no concept of how a human-level intelligence should
>> think that captures everything worth capturing about how humans do
>> think.
>>
>> This seems to imply that humans do not exist, since the way humans
>> actually *do* think captures everything worth capturing (as well as
>> some things not worth capturing) about how we think.
>>
>> -Abram
>
> Well, it doesn't actually imply that humans do not exist.  (What have
> you been smoking?)  I would say that B2 should be potentially capable
> of capturing anything of human thinking worth capturing.  But why
> would I say that?  Just because it makes the idea a little more
> feasible?  Or is there some more significant reason?  Again, B2
> should be capable of potentially capturing anything from an
> individual's thinking, given a base of expression for those
> possibilities.  No single human mind captures everything possible in
> human thought.  So in this case my suggested refinement is based on
> the feasible extent of the potential of a single mind, as opposed to
> billions of individual minds.  This makes sense, but again the
> refinement is derived from what I think would be more feasible.
> Since these are more limited representations of what seem to be
> models of the way humans think, there is no contradiction.  There are
> just a series of constraints.
> Jim Bromer
>

