On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Jim,
> You are right to call me on that. I need to provide an argument that,
> if no logic satisfying B exists, human-level AGI is impossible.

I don't know why I am being so aggressive these days.  I don't start
out intending to be in everyone's face.

If a logical system is incapable of representing the context of a
problem, then it cannot be said that the system itself implies the
problem cannot be solved within it.  You are only able to reach a
conclusion like that because you can transcend the supposed logical
boundaries of the system.  I am really disappointed that you did not
understand what I was saying, because your exceptional social skills
make you seem unusually capable of understanding what other people
are saying.

However, your idea is interesting, so I am glad that you helped
clarify it.  I have additional comments below.
>
> B1: A foundational logic for a human-level intelligence should be
> capable of expressing any concept that a human can meaningfully
> express.
>
> If a broad enough interpretation of the word "logic" is taken, this
> statement is obvious; it could amount to simply "A human-level
> intelligence should be capable of expressing anything it can
> meaningfully express" (i.e., logic = way of operating).
>
> The key idea for me is that logic is not the way we *do* think, it is
> the way we *should* think, in the ideal situation of infinite
> computational resources. So, a more refined B would state:
>
> B2: The theoretical ideal of how a human-level intelligence should
> think should capture everything worth capturing about the way humans
> actually do think.
>
> "Everything worth capturing" means everything that could lead to good results.
>
> So, I argue, if no logic exists satisfying B2, then human-level
> artificial intelligence is not possible. In fact, I think the negation
> of B2 is nonsensical:
>
> not-B2: There is no concept of how a human-level intelligence should
> think that captures everything worth capturing about how humans do
> think.
>
> This seems to imply that humans do not exist, since the way humans
> actually *do* think captures everything worth capturing (as well as
> some things not worth capturing) about how we think.
>
> -Abram

Well, it doesn't actually imply that humans do not exist.  (What have
you been smoking?)  I would say that B2 should be potentially capable
of capturing anything of human thinking worth capturing.  But why
would I say that?  Just because it makes the goal a little more
feasible?  Or is there some more significant reason?  Again, B2
should be capable of potentially capturing anything from an
individual's thinking, given a base for expressing those
possibilities.  No single human mind captures everything possible in
human thought.  So in this case my suggested refinement is based on
the feasible potential of a single mind, as opposed to billions of
individual minds.  This makes sense, but again the refinement is
derived from what I think would be more feasible.  Since these are
more limited representations of what seem to be models of the way
humans think, there is no contradiction.  There are just a series of
constraints.
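
To make the disagreement concrete, your argument can be sketched in
rough first-order terms (the notation is mine, not yours; C(T)
abbreviates "theory T captures everything worth capturing about how
humans think", and H stands for the way humans actually do think):

  B2:      there exists a T such that C(T)
  not-B2:  for all T, not C(T)
  Premise: C(H), since human thinking trivially captures itself
  So:      not-B2 contradicts the premise, but only if H counts as a
           candidate theory T, which is exactly the step I question.

On my refinement, C(T) is weakened to "T captures everything worth
capturing about the thinking of a single individual", and then the
premise no longer yields any contradiction; it just imposes
constraints.
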
Jim Bromer

