I'm following up on this idea of human concepts and our unique way of
reasoning, which is still very poorly understood. Sure, we describe it in
various ways: non-monotonic reasoning, defeasible inference, abduction,
induction, generalization. But when it really comes down to it, we don't
seem to have any idea how the brain actually does all this. We only
understand some aspects of these things in isolation; it is extremely
difficult to put the bigger picture together.
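
To pin down one of those terms: by "defeasible" I mean a conclusion that is
held only tentatively and withdrawn when better information arrives. Here is
a toy Python sketch of that pattern (the rules and names are purely
illustrative, not a claim about how the brain does it):

    # Toy defeasible reasoning: a default rule yields a tentative
    # conclusion that a more specific exception can defeat.
    defaults = {"bird": "can fly"}            # birds fly, by default
    exceptions = {"penguin": "cannot fly"}    # unless they are penguins

    def conclude(kinds):
        for kind in kinds:                    # specific exceptions win
            if kind in exceptions:
                return exceptions[kind]
        for kind in kinds:                    # otherwise fall back to a default
            if kind in defaults:
                return defaults[kind]
        return "unknown"

    print(conclude(["bird"]))             # -> can fly
    print(conclude(["bird", "penguin"]))  # -> cannot fly (withdrawn)

Learning that the same individual is also a penguin reverses the earlier
conclusion, which is exactly what monotonic logic forbids.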

So, I'm back to looking at this fundamental problem, which I describe as:
"*What is the knowledge we need to solve problems? How is it used to solve
problems? How is it acquired, updated, and modified?*"

I realized that when I was trying to organize the requirements for a design
that solves problems, I simply didn't have enough data or example problems
to come up with an answer!

Children gain massive amounts of knowledge as they learn, so their earliest
abilities are the most representative of what can be done, and learned, with
very little knowledge. But it is still hard to come up with example test
cases for those situations.

I can also make assumptions at various levels of advancement, such as
assuming the AI already has enough knowledge to read and understand problem
descriptions, and then work out how it might reason about them. But the
thing is, we need to understand how the AI bootstraps its knowledge
acquisition process: how it can start with so little knowledge and still
figure things out robustly. Otherwise, the assumptions we make will not
hold.
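
To make that concrete for myself, here is the bare skeleton of the loop I
keep circling around, in Python. Every piece of it is a stand-in I made up
for this sketch; the whole open problem is how to do each step well starting
from almost nothing:

    # Sketch of a bootstrapping loop: induce tentative generalizations
    # from observations, then carve out exceptions when later evidence
    # defeats them. The data and rules are toy placeholders.
    observations = [("tweety", "bird", "flies"),
                    ("polly",  "bird", "flies"),
                    ("pingu",  "bird", "walks")]

    knowledge = {}       # kind -> tentatively believed behavior
    exceptions = set()   # individuals that defeated a generalization

    for name, kind, behavior in observations:
        if kind not in knowledge:
            knowledge[kind] = behavior        # acquire: generalize from one case
        elif knowledge[kind] != behavior:
            exceptions.add((name, behavior))  # update: record the exception

    print(knowledge)     # {'bird': 'flies'}     -- the surviving default
    print(exceptions)    # {('pingu', 'walks')}  -- the defeated case

The acquire/update split is the interesting part: the same observation
stream both creates knowledge and revises it.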

So I guess my question is: how do we organize the requirements and test
cases to help us design an AGI? I have some ideas, but what do you guys
think? What are the requirements? What are the test cases? (Besides grasping
a rock, lol.)
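
One organizing idea I'll throw out, with the caveat that the tiers and cases
below are examples I invented on the spot: index every test case by the
knowledge it presupposes, so the test suite itself forms the bootstrap
ladder and shows exactly where an assumption stops holding.

    # Hypothetical organization of AGI test cases: each case declares
    # the knowledge it presupposes, so cases can be run in order from
    # near-zero priors upward.
    test_cases = [
        {"name": "track a moving object",      "presupposes": set()},
        {"name": "map words to objects",       "presupposes": {"object permanence"}},
        {"name": "read a problem description", "presupposes": {"object permanence",
                                                                "language"}},
    ]

    # Run cases in order of how much knowledge each one assumes.
    for case in sorted(test_cases, key=lambda c: len(c["presupposes"])):
        print(len(case["presupposes"]), "prerequisites:", case["name"])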



On Wed, Aug 18, 2010 at 2:00 PM, David Jones <davidher...@gmail.com> wrote:
Actually Mike, I am almost to the point of defining the idea you talk about
here. In my research on how children learn language while observing their
environment, I've gained some insight into how this acquisition of concepts
is done and how the concepts are used. But I'm tied up with other work right
now and haven't had enough time to explore this further.


On Wed, Aug 18, 2010 at 1:47 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
Peter V: I believe that the key aspect of human-level intelligence that
we’re interested in is our ability to learn and think in abstract concepts.

Well, concepts period. Yes, I agree that concepts are an essential nucleus
of what distinguishes AGI/human thought from narrow AI. IOW the capacity to
think:

"I have a problem here - now how am I going to go about solving it? - how
shall I define the problem? - what parts shall I think about first..?"

Concepts such as "go", "do", "think", "problem", and "solve" give a mind
"classical" powers of generalisation, as distinct from merely specialist
powers (I'm thinking of the distinction between "class" and "species" in
biology). Concepts give the brain the ability to treat every problem and
method of solution as just one of an open-ended class of similar problems
and methods, and they are the basis of creativity and diversity of approach.

But no one here is interested in discussing them on this level, only on the
specialist level, with clearly defined semantic networks and the like
attached, which are not how human concepts work at all.


