Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Charles D Hixson

Mike Tintner wrote:


Charles: Flaws in Hamlet:  I don't think of this as involving general
intelligence.  Specialized intelligence, yes, but if you see general
intelligence at work there you'll need to be more explicit for me to
understand what you mean.  Now, determining whether a particular
deviation from iambic pentameter was a flaw would require a deep
human intelligence, but I don't feel that understanding of how human
emotions are structured is a part of general intelligence except at a
very strongly superhuman level - the level where the AI's theory of
your mind is on a par with, or better than, your own.


Charles,

My flabber is so ghasted, I don't quite know what to say.  Sorry, I've 
never come across any remarks quite so divorced from psychological 
reality. There are millions of essays out there on Hamlet, each one of 
them different. Why don't you look at a few?:


http://www.123helpme.com/search.asp?text=hamlet
I've looked at a few (though not those).  In college I formed the 
definite impression that essays on the meaning of literature were 
exercises in determining what the instructor wanted.  This isn't 
something that I consider a part of general intelligence (except as 
mentioned above).


...
The reason over 70 per cent of students procrastinate when writing
essays like this about Hamlet (and the other 20-odd per cent also
procrastinate but don't tell the surveys) is in part that it is
difficult to know which of the many available approaches to take,
which of the odd thousand lines of text to use as support, and which
of innumerable critics to read. And people don't have a neat structure
for essay-writing to follow. (And people are inevitably and correctly
afraid that it will all take, if not forever, then far, far too long.)
The problem is that most, or at least many, of the approaches are
defensible, but your grade will be determined by the taste of the
instructor.  This isn't a problem of general intelligence except at a
moderately superhuman level.  Human tastes aren't reasonable ingredients
for an entry-level general intelligence.  Making them a requirement merely
ensures that one will never be developed (at least not one whose
development attends to your theories of what's required).


...

In short, essay writing is an excellent example of an AGI in action - 
a mind freely crossing different domains to approach a given subject 
from many fundamentally different angles.   (If any subject tends 
towards narrow AI, it is normal as opposed to creative maths).
I can see story construction as a reasonable goal for an AGI, but at the 
entry level they are going to need to be extremely simple stories.  
Remember that the goal structures of the AI won't match yours, so only 
places where the overlap is maximal are reasonable grounds for story 
construction.  Otherwise this is an area for specialized AIs, which 
isn't what we are after.


Essay writing also epitomises the NORMAL operation of the human mind.
When was the last time you tried to - or succeeded in - concentrating
for any length of time?
I have frequently written essays and other similar works.  My goal 
structures, however, are not generalized, but rather are human.  I have 
built into me many special purpose functions for dealing with things 
like plot structure, family relationships, relative stages of growth, etc. 


As William James wrote of the normal stream of consciousness:

"Instead of thoughts of concrete things patiently following one
another in a beaten track of habitual suggestion, we have the most
abrupt cross-cuts and transitions from one idea to another, the most
rarefied abstractions and discriminations, the most unheard-of
combinations of elements, the subtlest associations of analogy; in a
word, we seem suddenly introduced into a seething caldron of ideas,
where everything is fizzling and bobbing about in a state of
bewildering activity, where partnerships can be joined or loosened in
an instant, treadmill routine is unknown, and the unexpected seems the
only law."


Ditto:

"The normal condition of the mind is one of informational disorder:
random thoughts chase one another instead of lining up in logical
causal sequences."

Mihaly Csikszentmihalyi

Ditto the Dhammapada: "Hard to control, unstable is the mind, ever
in quest of delight."


When you have a mechanical mind that can a) write essays or tell 
stories or hold conversations  [which all present the same basic 
difficulties] and b) has a fraction of the difficulty concentrating 
that the brain does and therefore c) a fraction of the flexibility in 
crossing domains, then you might have something that actually is an AGI.


You seem to be setting an extremely high bar before you will
consider something an AGI.  Accepting all that you have said, for an AGI
to react as a human would react would require that the AGI be strongly
superhuman.


More to the point, I wouldn't DARE create an AGI which had motivations 
similar to 

AW: AW: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Dr. Matthias Heger


Matt Mahoney [mailto:[EMAIL PROTECTED] wrote


Object oriented programming is good for organizing software but I don't
think for organizing human knowledge.  It is a very rough
approximation.  We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does not scale well and
does not allow for incremental learning from examples.  It totally does
not work for language modeling, which is the first problem that AI must
solve.


I agree that the O-O paradigm is not adequate to model all the learning
algorithms and models we use. My own example of recognizing voices should
show that I doubt we use O-O models in our brain for everything in our
environment.

I think our brain learns a somewhat hierarchical model of the world. And
the algorithms for the low levels (e.g. voices, sounds) are probably
completely different from the algorithms for the higher levels of our
models. It is evident that a child has learning capabilities that are far
beyond those of an adult.
The reason is not only that the child's brain is nearly empty.
The physiological architecture is different to some degree. So we can expect
that learning the basic low levels of a world model requires algorithms
which we only had as children.
And the result of that learning is to some degree used as bias in later
learning algorithms when we are adults.

For example, we had to learn to extract syllables from the sound wave of
spoken language. Learning grammar rules happens at higher levels. Learning
semantics is higher still, and so on.

But it is a matter of fact that we use an O-O like model at the top levels
of our world model.
You can see this also in language grammar: subjects, objects, predicates
and adjectives have their counterparts in the O-O paradigm.

A photo of a certain scene is physically an array of colored pixels. But you
can ask a human what he sees, and a possible answer could be:
Well, there is a house. A man walks to the door. He wears a blue shirt. A
woman looks through the window ...

Obviously, the answer shows a lot about how people model the world at their
top (= conscious) level.
And obviously the model consists of interacting objects with attributes and
behavior.
So knowledge representation at higher levels is indeed O-O like.
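
To make that concrete, here is a minimal sketch in Java of the scene as
interacting objects with attributes and behavior; the class and method
names are illustrative assumptions, not a claim about how the brain
actually encodes them:

  // Hypothetical object model of the scene described above.
  class Window { }

  class House {
      Window window = new Window();   // attribute: the house has a window
  }

  class Person {
      String shirtColor;                             // attribute
      void walkTo(House h)       { /* behavior: approach the house */ }
      void lookThrough(Window w) { /* behavior: observe through it */ }
  }

  class SceneDemo {
      public static void main(String[] args) {
          House house = new House();
          Person man = new Person();
          man.shirtColor = "blue";                  // "He wears a blue shirt."
          man.walkTo(house);                        // "A man walks to the door."
          Person woman = new Person();
          woman.lookThrough(house.window);          // "A woman looks through the window."
      }
  }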

I think your answer and mine show that we do not use a single algorithm
which is responsible for extracting all the regularities from our
perceptions.

And, more importantly: there is physiological and psychological evidence
that the algorithms we use change to some degree during the first decade of
our life.





Re: [agi] Interesting HYPED approach to controlling animated characters...

2008-05-02 Thread Alex J. Champandard
The only thing to learn from here is the way they managed to build hype 
around their technology.  Possibly appropriate here :-)


Technology-wise, we're talking Lua state machines and genetic algorithms
that are manually tweaked for every special case.  The resulting neural
nets are pretty much only used to drive their active ragdolls towards
known poses.

http://aigamedev.com/editorial/naturalmotion-euphoria

Even game developers aren't swallowing the hype on this one!

Best,
Alex


Alex Champandard
Editor & Consultant
AiGameDev.com


Ben Goertzel wrote:

Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...

http://en.wikipedia.org/wiki/Euphoria_%28software%29


ben





Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Mike Tintner

Charles,

We're still a few million miles apart :). But perhaps we can focus on
something constructive here. On the one hand, while, yes, I'm talking about
extremely sophisticated behaviour in essay-writing, it has generalizable
features that characterise all life. (And I think, BTW, that a dog is still
extremely sophisticated in its motivations and behaviour - your idea there
strikes me as evolutionarily naive.)


Even for a student with an extremely dictatorial instructor, following his
instructions slavishly will be, when you analyse it, a highly problematic,
open-ended affair, and no slavish matter - i.e. how is he to apply some
general, say, deconstructionist criticism instructions and principles and
translate them into a v. complex essay?


In fact, it immediately strikes me that such essay-writing, and all
essay-writing, and most human and animal activities, will be a matter of
hierarchical goals - of, off the cuff, something v. crudely like: write an
essay on Hamlet - decide general approach... use deconstructionist
approach - find contradictory values in Hamlet to deconstruct... etc.


But all life, I guess, must be organized along those lines - the simplest
worm must start with something crudely like: find food to eat... decide
where food may be located... decide approach to food location... etc.
(which in turn will almost always conflict with opposed
emotions/motivations/goals like get some more sleep... stay cuddled up in
burrow...)


And even, pace Koestler and others, v. simple actions, like reaching out for
food in a kitchen, can be a hierarchical affair, with only the general
direction and goal decided to begin with, and the more specific targeting of
the arm and shaping of the hand specified only at later stages of the action.


Hierarchical goals are surely fundamental to general intelligence.

Interestingly, when I Google "hierarchical goals" and AI, I get v. little -
except from our immediate friends, the gamers - and this from Programming
Game AI by Example by Mat Buckland:


Chapter 9: Hierarchical Goal Based Agents

This chapter introduces agents that are motivated by hierarchical goals.
This type of architecture is far more flexible than the one described in
Chapter 2, allowing AI programmers to easily imbue game characters with the
brains necessary to do all sorts of funky stuff.
Discussion, code and demos of: atomic goals, composite goals, goal
arbitration, creating goal evaluation functions, implementation in Raven,
using goal evaluations to create personalities, goals and agent memory,
automatic resuming of interrupted activities, negotiating special path
obstacles such as elevators, doors or moving platforms, command queuing,
scripting behavior.


Anyone care to comment about using hierarchical goals in AGI or elsewhere?
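
For concreteness, here is a minimal sketch in Java of the composite/atomic
goal pattern Buckland describes; the class names are illustrative guesses,
not taken from his Raven code:

  import java.util.ArrayDeque;
  import java.util.Deque;

  // A goal either completes in one step (atomic) or decomposes into
  // subgoals (composite), which may themselves be composite.
  interface Goal {
      boolean process();   // returns true when the goal is complete
  }

  class AtomicGoal implements Goal {
      private final String action;
      AtomicGoal(String action) { this.action = action; }
      public boolean process() {
          System.out.println("doing: " + action);
          return true;     // pretend every atomic action succeeds in one step
      }
  }

  class CompositeGoal implements Goal {
      private final Deque<Goal> subgoals = new ArrayDeque<>();
      void addSubgoal(Goal g) { subgoals.addLast(g); }
      public boolean process() {
          // work through the subgoals front to back, popping each as it completes
          while (!subgoals.isEmpty()) {
              if (!subgoals.peekFirst().process()) return false;
              subgoals.removeFirst();
          }
          return true;
      }
  }

  class EssayDemo {
      public static void main(String[] args) {
          CompositeGoal essay = new CompositeGoal();      // write an essay on Hamlet
          CompositeGoal approach = new CompositeGoal();   // decide general approach
          approach.addSubgoal(new AtomicGoal("pick a deconstructionist approach"));
          approach.addSubgoal(new AtomicGoal("find contradictory values in Hamlet"));
          essay.addSubgoal(approach);
          essay.addSubgoal(new AtomicGoal("draft the essay"));
          essay.process();
      }
  }

The point of the pattern is exactly the one made above: only the general
direction is decided up front, and the more specific steps are filled in
(and can be re-arbitrated) as the action unfolds.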




RE: [agi] help me,please for books for agi and mind in pdf

2008-05-02 Thread Derek Zahn
Bruno Frandemiche asked for online AGI-related text.
 
If you're adventurous, I'd recommend the Workshop proceedings from 2006:
 
http://www.agiri.org/wiki/Workshop_Proceedings
 
and the conference proceedings from AGI-08:
 
http://www.agi-08.org/papers



Re: [agi] upcoming oral at Princeton

2008-05-02 Thread Bob Mottram
My guess would be that this kind of approach will only be partly
successful, since fundamentally it's only based upon an elaborate kind
of 2D template matching.  I think what actually happens is that during
early childhood experience we are able to statistically correlate
certain types of geometry with the patterns of light falling upon our
retinas.  When we later view flat images we're able to retrieve the
associated type of geometry and imagine what the object might look
like from various angles, even if we have only seen it once.  I expect
that biological vision systems are fundamentally designed for 3D
understanding of the world, since this is of high adaptive value,
rather than for a sort of 2D screen-scraping of the retina.







[agi] Panda: a pattern-based programming system

2008-05-02 Thread Brad Paulsen
Readers of these lists might enjoy the refereed paper "Overview of the Panda
Programming System" (http://www.jot.fm:80/issues/issue_2008_05/article1/),
described in the following abstract:


This article provides an overview of a pattern-based programming system, 
named Panda, for automatic generation of high-level programming language 
code. Many code generation systems have been developed [2, 3, 4, 5, 6] that 
are able to generate source code by means of templates, which are defined by 
means of transformation languages such as XSL, ASP, etc. But these templates 
cannot be easily combined because they map parameters and code snippets 
provided by the programmer directly to the target programming language. On 
the contrary, the patterns used in a Panda program generate a code model 
that can be used as input to other patterns, thereby providing an unlimited 
capability of composition. Since such a composition may be split across 
different files or code units, a high degree of separation of concerns [15] 
can be achieved.


A pattern itself can be created by using other patterns, thus making it easy 
to develop new patterns. It is also possible to implement an entire 
programming paradigm, methodology or framework by means of a pattern 
library: design patterns [8], Design by Contract [12], Aspect-Oriented 
Programming [1, 11], multi-dimensional separation of concerns [13, 18], data 
access layer, user interface framework, class templates, etc. This way, 
developing a new programming paradigm does not require to extend an existing 
programming system (compiler, runtime support, etc.), thereby focusing on 
the paradigm concepts.


The Panda programming system introduces a higher abstraction level with 
respect to traditional programming languages: the basic elements of a 
program are no longer classes and methods but, for instance, design patterns 
and crosscutting concerns [1, 11].


Cheers,

Brad



Re: [agi] upcoming oral at Princeton

2008-05-02 Thread Stephen Reed
Hi Josh,

I briefly looked at the ImageNet description at the Princeton WordNet site.
It does not reveal whether the images are open source to the extent that
this new data can be linked and distributed with WordNet, which has a very
permissive license.

-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, May 2, 2008 12:22:40 PM
Subject: [agi] upcoming oral at Princeton

Just saw this announcement go by:

Abstract:

Constructing ImageNet

Data sets are essential in computer vision and content-based image retrieval
research. We present the work in progress for constructing ImageNet, a
large-scale image data set based on the Princeton WordNet. The goal is to
associate more than 1000 clean images with each node of WordNet, which
consists of ~30,000 (estimated) imageable nodes. We build a prototype system
for constructing ImageNet, as a first step toward large-scale deployment.
For each node of WordNet, which is a synonym set (synset) for a single
concept, we collect candidate images from the Internet and clean them up
with semi-automatic labeling.  We train boosting classifiers from
human-labeled data and use active learning to substantially speed up the
labeling process. We also developed a web interface for massive online human
labeling. We demonstrate the effectiveness of our system with results from a
subset of synsets.

Reading list:

Text book:

Pattern Recognition and Machine Learning, Christopher M. Bishop, 2006.
Chapters 1, 2, 8, 14.
Modern Operating Systems, Tanenbaum.

Papers:
Animals on the Web, Berg, Forsyth, CVPR06
OPTIMOL: automatic Online Picture collecTion via Incremental MOdel Learning, Li, Wang, Fei-Fei, CVPR07
Learning Object Categories from Google's Image Search, Fergus, Fei-Fei, Perona, Zisserman, ICCV05
Harvesting Image Databases from the Web, Schroff, Zisserman, ICCV07
From Aardvark to Zorro: A Benchmark of Mammal Images, Fink, Ullman, NIPS05
Tiny Images, Torralba, Fergus, Freeman, Tech Report MIT, 2007
Labeling Images with a Computer Game, Luis von Ahn and Laura Dabbish, CHI04
LabelMe: a database and web-based tool for image annotation, Russell, Torralba, IJCV07
Introduction to a large scale general purpose ground truth dataset: methodology, annotation tool, and benchmarks, Z.Y. Yao, X. Yang, and S.C. Zhu, EMMCVPR07
Combining active and semi-supervised learning for spoken language understanding, Tur, Hakkani-Tur, Schapire, Speech Communication, 05
Online boosting and vision, CVPR06




Re : [agi] help me,please for books for agi and mind in pdf

2008-05-02 Thread Bruno Frandemiche
Thank you, Derek - I will read all this.
Bye

----- Original Message -----
From: Derek Zahn [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, 2 May 2008, 15:18:09
Subject: RE: [agi] help me,please for books for agi and mind in pdf

Bruno Frandemiche asked for online AGI-related text.
 
If you're adventurous, I'd recommend the Workshop proceedings from 2006:
 
http://www.agiri.org/wiki/Workshop_Proceedings
 
and the conference proceedings from AGI-08:
 
http://www.agi-08.org/papers










Language learning (was Re: AW: AW: AW: AW: [agi] How general can be and should be AGI?)

2008-05-02 Thread Matt Mahoney
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
 
Actually that's only true in artificial languages.  Children learn
words with semantic content like "ball" and "milk" before they learn
function words like "the" and "of", in spite of their higher
frequency.
 
 
 
Before they learn the words and their meanings they have to learn to
recognize the sounds for the words. And even if they use words like
"of" and "the" later, they must be able to separate these function
words and relation words from object words before they learn any word.
But separating words means classifying words, and that means knowledge
of grammar to a certain degree.

Lexical segmentation is learned before semantics, but other grammar is
learned afterwards.  Babies learn to segment continuous speech into
words at 7-10 months [1].  This is before they learn their first word,
but is detectable because babies will turn their heads in preference to
segmentable speech.

It is also possible to guess word divisions in text without spaces
given only a statistical knowledge of letter n-grams [2].
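
As a toy illustration of the idea (a sketch only, not the method of [2]):
train letter-bigram counts on a little spaced text, then propose a word
boundary wherever the bigram spanning two adjacent letters was never seen
inside a training word.

  import java.util.HashMap;
  import java.util.Map;

  // Toy word segmentation from letter-bigram statistics alone.
  class SegmentDemo {
      public static void main(String[] args) {
          String training = "the cat sat on the mat the cat ate the rat";
          Map<String, Integer> bigrams = new HashMap<>();
          for (String word : training.split(" "))
              for (int i = 0; i + 1 < word.length(); i++)
                  bigrams.merge(word.substring(i, i + 2), 1, Integer::sum);

          String text = "thecatsatonthemat";   // no spaces
          StringBuilder out = new StringBuilder();
          for (int i = 0; i < text.length(); i++) {
              out.append(text.charAt(i));
              // a bigram never seen inside a training word suggests a boundary
              if (i + 1 < text.length()
                      && bigrams.getOrDefault(text.substring(i, i + 2), 0) == 0)
                  out.append(' ');
          }
          System.out.println(out);   // prints: the cat sat on the mat
      }
  }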

Natural language has a structure that makes it easy to learn
incrementally from examples with a sufficiently powerful neural
network.  It must, because any unlearnable features will disappear.


  Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
 Techniques for parsing artificial languages fail for natural
 languages
 because the parse depends on the meanings of the words, as in the
 following example:
 
 - I ate pizza with pepperoni.
 - I ate pizza with a fork.
 - I ate pizza with a friend.
 
 
In the days of early AI, the O-O paradigm was not as sophisticated as it
is today. The phenomenon in your example is well known in the O-O
paradigm and is modeled by overloaded functions, which means that
objects may have several functions with the same name but with
different signatures:

eat(Food f)
eat(Food f, List<SideDish> l)
eat(Food f, List<Tool> l)
eat(Food f, List<People> l)
...
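
In compilable Java the idea looks roughly like this (an illustrative sketch
with made-up types; note that real Java cannot overload on List<SideDish>
versus List<Tool>, because generic type parameters are erased at compile
time, so plain parameter types are used instead):

  // Overload resolution picks the method from the static type of the
  // second argument - one reading per pizza sentence.
  class Food { }
  class SideDish { }
  class Tool { }
  class Person { }

  class Eater {
      void eat(Food f)             { System.out.println("ate pizza"); }
      void eat(Food f, SideDish s) { System.out.println("with pepperoni"); }
      void eat(Food f, Tool t)     { System.out.println("with a fork"); }
      void eat(Food f, Person p)   { System.out.println("with a friend"); }

      public static void main(String[] args) {
          Eater e = new Eater();
          e.eat(new Food(), new Tool());     // resolves to the Tool overload
          e.eat(new Food(), new Person());   // resolves to the Person overload
      }
  }

Of course, this resolves the ambiguity only once the parser has already
typed "pepperoni", "fork" and "friend" - which is the hard part the pizza
examples point at.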

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.

I think it is clear that there are representations like classes,
objects, relations between objects, and attributes of objects.

But the crucial questions are:
How did we and how do we build our O-O models?
How does the brain create abstract concepts like ball and milk?
How do we find classes, objects and relations?

We need to understand how children learn grammar without any concept of
what a noun or a verb is.  Also, how do people learn hierarchical
relationships before they learn what a hierarchy is?

1. Jusczyk, Peter W. (1996), Investigations of the word segmentation
abilities of infants, 4th Intl. Conf. on Speech and Language
Processing, Vol. 3, 1561-1564.

2. http://cs.fit.edu/~mmahoney/dissertation/lex1.html


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Panda: a pattern-based programming system

2008-05-02 Thread Daniel Allen
A thousand thank yous.



AW: Language learning (was Re: AW: AW: AW: AW: [agi] How general can be and should be AGI?)

2008-05-02 Thread Dr. Matthias Heger

 Matt Mahoney [mailto:[EMAIL PROTECTED] wrote

eat(Food f)
eat(Food f, List<SideDish> l)
eat(Food f, List<Tool> l)
eat(Food f, List<People> l)
...

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.



As I said, my intention is not to find a set of O-O like rules to create
AGI.
The fact that early approaches failed to build AGI with a set of similar
rules does not prove that AGI cannot consist of such rules.

For example, there were also approaches to create AI with biologically
inspired neural networks, with some minor success, but no real
breakthrough either.

So this proves nothing except that the problem of AGI is not so easy
to solve.

The brain is still a black box with regard to many phenomena.

We can analyze our own conscious thoughts and our communication, which is
nothing other than sending ideas and thoughts from one brain to another
via natural language.

I am convinced that the structure and contents of our language are not
independent of the internal representation of knowledge.

And from language we must conclude that there are O-O like models in the
brain, because the semantics is O-O.

There might be millions of classes and relationships.
And surely every day or night the brain refactors parts of its model.

The roadmap to AGI will probably be top-down and not bottom-up.
The bottom-up approach is the one used by biological evolution.

Creating AGI by software engineering means that we first must know where we
want to go and then how to get there.

Human language and conscious thought suggest that AGI must be able to
represent the world O-O like at the top level.
So this ability is the answer to the question of where we want to go.

Again, this does not mean that we must find all the classes and objects. But
we must find an algorithm that generates O-O like models of its environment
based on its perceptions and some bias, where the need for that bias can be
justified on performance grounds.

We can expect that the top-level architecture of AGI is the easiest part of
an AGI project, because the contents of our own consciousness give us some
hints (though not all) about how our own world representation works at the
top level. And this is O-O, in my opinion. There is also a phenomenon of
associations between patterns (classes). But this is just a question of
retrieving information and of attention to relevant parts of the O-O model,
and is no contradiction to the existence of the O-O paradigm.

When we go to lower levels, it is clear that difficulties arise.
The reason is that we have no way to consciously introspect the low levels
of our brain. Science gives us hints mainly for the lowest levels
(chemistry, physics...).

So the medium layers of AGI will be the most difficult layers.
By the way, this is also often the case in normal software.
In the medium layers there will be base functionality and the framework
for the top level.







[agi] Re: AW: Language learning

2008-05-02 Thread Matt Mahoney
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 So the medium layers of AGI will be the most difficult layers.

I think if you try to integrate a structured or O-O knowledge base at
the top and a signal processing or neural perceptual/motor system at
the bottom, then you are right.  We can do a thought experiment to
estimate its cost.  Put a human in the middle and ask how much effort
or knowledge is required.  An example would be translating a low-level
natural language question into a high-level query in SQL or CycL or
whatever formal language the KB uses.

I think you can see that, for a formal representation of common sense
knowledge, the skill required for this interface is at a higher
level than the knowledge actually represented at the top level.  If
this knowledge were stored in the human brain, then it could be
retrieved faster, and by someone who had no special skills in
understanding a formal language.

But there are still some applications where this design makes sense.
One example would be a calculator.  At the low level, you have a
question like "how many square inches in a third of an acre?"  The
middle level converts this to an equation and punches the numbers into
the top-level calculator.  This is preferable to the human doing the
arithmetic.  A database would be another example.
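
To make the division of labor concrete, here is a sketch of those three
levels for that exact question (the middle level's parsing is faked with
hard-coded values; the conversions are standard: 1 acre = 43,560 sq ft
and 1 sq ft = 144 sq in):

  // Low level: "how many square inches in a third of an acre?"
  // Middle level: turn it into an equation (faked here).
  // Top level: a dumb calculator that just evaluates the equation.
  class CalculatorDemo {
      static double acresToSquareInches(double acres) {
          final double SQFT_PER_ACRE = 43560.0;
          final double SQIN_PER_SQFT = 144.0;   // 12 in x 12 in
          return acres * SQFT_PER_ACRE * SQIN_PER_SQFT;
      }

      public static void main(String[] args) {
          double acres = 1.0 / 3.0;   // the middle layer's parse of "a third of an acre"
          System.out.printf("%.0f square inches%n", acresToSquareInches(acres));
          // prints: 2090880 square inches
      }
  }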

Where it doesn't make sense is when the top level is doing something
that humans are already good at.  It would make more sense to figure
out how humans learn and represent common sense instead of guessing. 
We can do experiments in cognitive psychology.  What can people learn?
remember? perceive?


-- Matt Mahoney, [EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


AW: [agi] Re: AW: Language learning

2008-05-02 Thread Dr. Matthias Heger
I think it is even more complicated. The flow of signals in the brain does
not move only from low levels to high levels.
The modules communicate in both directions, and as far as I know there is
already evidence for this from cognitive science.

If you want to recognize objects in pictures you need to find the edges or
boundaries. But the other direction works too: if you know the object -
because someone tells you what is in the picture, or because you use other
knowledge about the picture - then it is easier for you to detect the edges
of the object.

A thought experiment is a good idea.
Let's say we have a robot in the garden and we ask it:
"How many apples are on the tree?"

The robot is assumed to be experienced, i.e. it should have a sufficient
world model to understand and answer the question.

I make this assumption at this point because first we have to answer the
question of where we want to go. In the following I describe a hypothetical
process in the robot's brain. Note that I assume the robot has learned most
of this process (classes, interactions of objects) from past experience.
But of course it must have had some classes and information flows from its
first day on.

Ok. The robot gets the sound wave, and its low-level modules try to
recognize known patterns in this wave.

First it recognizes a voice pattern.

This triggers a voice object, which in turn triggers other objects: for
example a speech object, an information object, a person object, and
perhaps a lot of other objects.


The person object analyzes the sound wave only to obtain information about
who is speaking. The speech object only tries to figure out what language
is spoken. But here is already a trick: the person object detects that the
voice comes from the person Matt, and the person object has the value
English in its language attribute. The objects inform each other in
parallel about their values, and the speech object receives the value
English from the person object. This makes it easier for the speech object
to recognize the language, because it can use a useful hypothesis and
activate certain English-tester objects. All these objects make their own
analyses and use information about the results of other objects.

After a short time, certain important objects are active:

A question object of the type quantity question.
Word objects of different grammar types with values:
How
Many
APPLES
APPLIES
Are
On
The
Tree

There is something special about the words APPLES and APPLIES:
they have the same position value (= third word in the question), and
each has a probability value of 50%.
This means that the robot is not quite sure whether the third word was
APPLES or APPLIES.

The question object is already a higher-level object. It does not use the
sound wave input but the set of active word objects.

The question object contains a subject object, which itself contains a
GrammarSubject object and a GivenHints object. It has to decide whether the
subject is APPLES or TREE.
The robot knows from past experience that subjects of quantity questions
are in the plural. For any attribute of any object there is a setter method
with a learnable validate function. So the subject object accepts only the
word APPLES for its GrammarSubject object.

This fact also increases the probability value of the word APPLES and
decreases the probability of APPLIES.
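
A minimal sketch in Java of just this disambiguation step (an illustration
of the hypothetical process above, with a made-up penalty factor):

  import java.util.List;

  // Two candidate readings of the third word, re-weighted by the learned
  // constraint "subjects of quantity questions are plural".
  class WordHypothesis {
      final String word;
      final boolean plural;
      double probability;
      WordHypothesis(String word, double probability, boolean plural) {
          this.word = word; this.probability = probability; this.plural = plural;
      }
  }

  class SubjectSlot {
      // learnable validate function: this slot accepts only plural nouns
      boolean validate(WordHypothesis h) { return h.plural; }

      void bind(List<WordHypothesis> candidates) {
          double total = 0;
          for (WordHypothesis h : candidates) {
              if (!validate(h)) h.probability *= 0.1;   // penalize APPLIES
              total += h.probability;
          }
          for (WordHypothesis h : candidates) h.probability /= total;  // renormalize
      }

      public static void main(String[] args) {
          WordHypothesis apples  = new WordHypothesis("APPLES", 0.5, true);
          WordHypothesis applies = new WordHypothesis("APPLIES", 0.5, false);
          new SubjectSlot().bind(List.of(apples, applies));
          System.out.printf("APPLES %.2f, APPLIES %.2f%n",
                            apples.probability, applies.probability);
          // prints: APPLES 0.91, APPLIES 0.09
      }
  }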

Finally the robot has the complete question object, which activates a goal
object: Answer the question!

This was just the low level. At this point the robot must understand what
it really shall do.

It knows from experience that it gets reward if it answers the active
question object whenever a corresponding goal object is active.

An answer to a quantity question must be a number.
The number is the result of a count process which corresponds to the
subject of the quantity question.

Ok. We are in one of the medium levels of AGI. And I already wonder how our
robot could have learned the low level I described so far.
So I stop here, because everything is too complex now.

But these thought experiments are absolutely necessary if we want to create
AGI.





