I will try to answer several posts here. I said that the knowledge base of an
AGI must be opaque because it has 10^9 bits of information, which is more than
a person can comprehend. By opaque, I mean that you can't do any better by
examining or modifying the internal representation than you co
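The 10^9-bits claim can be made concrete with a back-of-envelope calculation. The figure of ~50 bits/s for human linguistic intake is an assumption here (estimates of human information-processing rates vary widely and are contested), so treat this as an illustrative sketch, not a measurement:

```python
# Back-of-envelope: how long would it take a person to "read" 10^9 bits?
# Assumes ~50 bits/s of linguistic intake -- a rough, contested figure.
KB_BITS = 10**9          # size of the hypothetical AGI knowledge base
BITS_PER_SECOND = 50     # assumed human intake rate

seconds = KB_BITS / BITS_PER_SECOND
days = seconds / 86_400  # seconds per day
print(f"{days:.0f} days of nonstop reading")  # roughly 231 days
```

Even under generous assumptions, merely scanning the contents once takes the better part of a year of continuous effort, before any attempt at comprehension.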
Very good, I agree, and this is one of the requirements for the Project Halo contest (the system took and passed the AP chemistry exam): http://www.projecthalo.com/halotempl.asp?cid=30
Also, it is a critical task for expert systems to explain why they are doing what they are doing, and for business applications, I
Does it generate any kind of overview reasoning of why it does something? If in the VR you tell the bot to go pick up something, and it hides in the corner instead, does it have any kind of useful feedback or 'insight' into its thoughts? I intend to have different levels of thought processes and reas
No, presumably you would have the ability to take a snapshot of what it's doing, or, as it's doing it, it should be able to explain what it is doing.

James Ratcliff

BillK <[EMAIL PROTECTED]> wrote:
On 11/14/06, James Ratcliff wrote:
> If the "contents of a knowledge base for AGI will be beyond our ability
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
It is possible in principle, but even given the probabilistic logic
semantics of the system's knowledge it's not pragmatic, because
somet
If the "contents of a knowledge base for AGI will be beyond our ability to comprehend" then it is probably not human-level AGI, it is something entirely new, and it will be alien and completely foreign and unable to interact with us at all, correct? If you mean it will have more knowledge than we
I concur here; also it was stated earlier that an AGI couldn't be understood because humans can't understand the brain. So when we become able to understand the brain, will this view be reversed? Or is the thought that we will NEVER be able to understand the brain? Because while I believe it to
Hi,
I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain that it is making a
judgment in. It is merely totaling up the weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing up
Richard Loosemore
> As for your suggestion about the problem being centered on the use of
> model-theoretic semantics, I have a couple of remarks.
>
> One is that YES! this is a crucial issue, and I am so glad to see you
> mention it. I am going to have to read your paper and discuss with you
On 11/14/06, James Ratcliff wrote:
If the "contents of a knowledge base for AGI will be beyond our ability to
comprehend" then it is probably not human level AGI, it is something
entirely new, and it will be alien and completely foreign and unable to
interact with us at all, correct?
If you me
>> Models that are simple enough to debug are too simple to scale.
>> The contents of a knowledge base for AGI will be beyond our ability to comprehend.

Given sufficient time, anything should be able to be understood and
debugged. Size alone does not make something incomprehensible.
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.
"Pragmatically possible" obscures the point I was trying to make with
Matt. If you were to freeze-frame Novamente right after it took