Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Charles D Hixson

Mike Tintner wrote:


Charles: Flaws in Hamlet:  I don't think of this as involving general
intelligence.  Specialized intelligence, yes, but if you see general 
intelligence at work there you'll need to be more explicit for me to 
understand what you mean.  Now determining whether a particular 
deviation from iambic pentameter was a flaw would require a deep 
human intelligence, but I don't feel that understanding of how human 
emotions are structured is a part of general intelligence except on a 
very strongly superhuman level.  The level where the AI's theory of 
your mind was on a par with, or better than, your own.


Charles,

My flabber is so ghasted, I don't quite know what to say.  Sorry, I've 
never come across any remarks quite so divorced from psychological 
reality. There are millions of essays out there on Hamlet, each one of 
them different. Why don't you look at a few?:


http://www.123helpme.com/search.asp?text=hamlet
I've looked at a few (though not those).  In college I formed the 
definite impression that essays on the meaning of literature were 
exercises in determining what the instructor wanted.  This isn't 
something that I consider a part of general intelligence (except as 
mentioned above).


...
The reason over 70 per cent of students procrastinate when writing 
essays like this about Hamlet, (and the other 20 odd per cent also 
procrastinate but don't tell the surveys), is in part that it is 
difficult to know which of the many available approaches to take, and 
which of the odd thousand lines of text to use as support, and which 
of innumerable critics to read. And people don't have a neat structure 
for essay-writing to follow. (And people are inevitably and correctly 
afraid that it will all take if not forever then far, far too long).
The problem is that most, or at least many, of the approaches are 
defensible, but your grade will be determined by the taste of the 
instructor.  This isn't a problem of general intelligence except at a 
moderately superhuman level.  Human tastes aren't reasonable ingredients 
for an entry-level general intelligence.  Making them a requirement merely 
ensures that one will never be developed (at least not one whose 
development follows your theory of what's required).


...

In short, essay writing is an excellent example of an AGI in action - 
a mind freely crossing different domains to approach a given subject 
from many fundamentally different angles.   (If any subject tends 
towards narrow AI, it is normal as opposed to creative maths).
I can see story construction as a reasonable goal for an AGI, but at the 
entry level they are going to need to be extremely simple stories.  
Remember that the goal structures of the AI won't match yours, so only 
places where the overlap is maximal are reasonable grounds for story 
construction.  Otherwise this is an area for specialized AIs, which 
isn't what we are after.


Essay writing also epitomises the NORMAL operation of the human mind. 
When was the last time you tried to - or succeeded in concentrating 
for any length of time?
I have frequently written essays and other similar works.  My goal 
structures, however, are not generalized, but rather are human.  I have 
built into me many special purpose functions for dealing with things 
like plot structure, family relationships, relative stages of growth, etc. 


As William James wrote of the normal stream of consciousness:

Instead of thoughts of concrete things patiently following one 
another in a beaten track of habitual suggestion, we have the most 
abrupt cross-cuts and transitions from one idea to another, the most 
rarefied abstractions and discriminations, the most unheard-of 
combinations of elements, the subtlest associations of analogy; in a 
word, we seem suddenly introduced into a seething caldron of ideas, 
where everything is fizzling and bobbing about in a state of 
bewildering activity, where partnerships can be joined or loosened in 
an instant, treadmill routine is unknown, and the unexpected seems the 
only law.


Ditto:

The normal condition of  the mind is one of informational disorder: 
random thoughts chase one another instead of lining up in logical 
causal sequences.

Mihaly Csikszentmihalyi

Ditto the Dhammapada: "Hard to control, unstable is the mind, ever 
in quest of delight..."


When you have a mechanical mind that can a) write essays or tell 
stories or hold conversations  [which all present the same basic 
difficulties] and b) has a fraction of the difficulty concentrating 
that the brain does and therefore c) a fraction of the flexibility in 
crossing domains, then you might have something that actually is an AGI.


You seem to be placing an extremely high bar in place before you will 
consider something an AGI.  Accepting all that you have said, for an AGI 
to react as a human would react would require that the AGI be strongly 
superhuman.


More to the point, I wouldn't DARE create an AGI which had motivations 
similar to 

AW: AW: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Dr. Matthias Heger


Matt Mahoney [mailto:[EMAIL PROTECTED] wrote


Object oriented programming is good for organizing software but I don't
think for organizing human knowledge.  It is a very rough
approximation.  We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does not scale well and
does not allow for incremental learning from examples.  It totally does
not work for language modeling, which is the first problem that AI must
solve.


I agree that the O-O paradigm is not adequate to model all the learning
algorithms and models we use. My own example of recognizing voices should
show that I doubt that we use O-O models in our brain for everything in
our environment.

I think our brain learns a somewhat hierarchical model of the world. And
the algorithms for the low levels (e.g. voices, sounds) are probably completely
different from the algorithms for the higher levels of our models. It is evident
that a child has learning capabilities that are far beyond those of an
adult.
The reason is not only that the child's brain is nearly empty.
The physiological architecture is different to some degree. So we can expect
that learning the basic low levels of a world model requires algorithms
which we only had as children.
And the result of that learning is to some degree used as bias in later
learning algorithms when we are adults.

For example, we had to learn to extract syllables from the sound wave of
spoken language. Learning grammar rules happens at higher levels. Learning
semantics is higher still, and so on.

But it is a matter of fact that we use an O-O like model at the top levels
of our world model.
You can see this also in language grammar. Subjects, objects, predicates and
adjectives have their counterparts in the O-O paradigm.

A photo of a certain scene is physically an array of colored pixels. But you
can ask a human what he sees. And a possible answer could be:
Well, there is a house. A man walks to the door. He wears a blue shirt. A
woman looks through the window ...

Obviously, the answer shows a lot about how people model the world at their
top level (= the conscious level).
And obviously the model consists of interacting objects with attributes and
behavior.
So knowledge representation at higher levels is indeed O-O like.
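
(A purely illustrative Java sketch of such an O-O scene description; the class
and attribute names are invented for this example and are not a claim about how
the brain actually encodes a scene:)

// Illustrative only: a scene described as interacting objects with
// attributes and behavior, mirroring the verbal description above.
import java.util.ArrayList;
import java.util.List;

class SceneObject {
    String name;
    SceneObject(String name) { this.name = name; }
}

class Person extends SceneObject {
    String shirtColor;
    Person(String name, String shirtColor) {
        super(name);
        this.shirtColor = shirtColor;
    }
    // Behavior: relations to other objects are expressed as methods.
    String walksTo(SceneObject target)      { return name + " walks to the " + target.name; }
    String looksThrough(SceneObject target) { return name + " looks through the " + target.name; }
}

public class Scene {
    public static void main(String[] args) {
        SceneObject house  = new SceneObject("house");
        SceneObject door   = new SceneObject("door");
        SceneObject window = new SceneObject("window");
        Person man   = new Person("a man", "blue");
        Person woman = new Person("a woman", null);

        List<String> description = new ArrayList<>();
        description.add("There is a " + house.name + ".");
        description.add(man.walksTo(door) + ".");
        description.add("He wears a " + man.shirtColor + " shirt.");
        description.add(woman.looksThrough(window) + ".");
        description.forEach(System.out::println);
    }
}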

I think your answer and mine show that we do not use a single algorithm which
is responsible for extracting all the regularities from our perceptions.

And more importantly: there is physiological and psychological evidence that
the algorithms we use change to some degree during the first decade of our
lives.





Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Mike Tintner

Charles,

We're still a few million miles apart :). But perhaps we can focus on 
something constructive here. On the one hand, while, yes, I'm talking about 
extremely sophisticated behaviour in essaywriting, it has generalizable 
features that characterise all life. (And I think BTW that a dog is still 
extremely sophisticated in its motivations and behaviour -  your idea there 
strikes me as evolutionarily naive).


Even if a student has an extremely dictatorial instructor, following his 
instructions slavishly will be, when you analyse it, a highly problematic, 
open-ended affair, and no slavish matter - i.e. how is he to apply some 
general, say, deconstructionist criticism instructions and principles and 
translate them into a v. complex essay?


In fact, it immediately strikes me that such essaywriting, and all essaywriting, 
and most human activities and animal activities, will be a matter of 
hierarchical goals - of, off the cuff, something v. crudely like - write an 
essay on Hamlet - decide general approach...  use deconstructionist 
approach  -  find contradictory values in Hamlet to deconstruct...etc.


But all life, I guess, must be organized along those lines - the simplest 
worm must start with something crudely like : find food to eat...decide 
where food may be located   decide approach to food location  etc.. 
(which in turn will almost always be conflicting with opposed 
emotions/motivations/goals like get some more sleep ..stay cuddled up in 
burrow.. )


And even, pace Koestler and others, v. simple actions, like reaching out for 
food in a kitchen, can be a hierarchical affair, with only the general 
direction and goal decided to begin with, and more specific targeting of arm 
and shaping of hand, only specified at later stages of the action.


Hierarchical goals are surely fundamental to general intelligence.

Interestingly, when I Google "hierarchical goals" and AI, I get v. little - 
except from our immediate friends, gamers - and this from Programming Game 
AI by Example by Mat Buckland:


Chapter 9: Hierarchical Goal Based Agents

This chapter introduces agents that are motivated by hierarchical goals. 
This type of architecture is far more flexible than the one described in 
Chapter 2 allowing AI programmers to easily imbue game characters with the 
brains necessary to do all sorts of funky stuff.
Discussion, code and demos of: atomic goals, composite goals, goal 
arbitration, creating goal evaluation functions,  implementation in Raven, 
using goal evaluations to create personalities, goals and agent memory, 
automatic resuming of interrupted activities, negotiating special path 
obstacles such as elevators, doors or moving platforms, command queuing, 
scripting behavior.
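
(To make the idea concrete, here is a minimal sketch of atomic and composite
goals in Java - a toy illustration of goal decomposition, not code from the
book:)

// Illustrative sketch of hierarchical (composite) goals, not Buckland's code.
import java.util.ArrayDeque;
import java.util.Deque;

abstract class Goal {
    // Returns true when the goal has completed.
    abstract boolean process();
}

// An atomic goal does one concrete thing.
class AtomicGoal extends Goal {
    private final String action;
    AtomicGoal(String action) { this.action = action; }
    boolean process() {
        System.out.println("doing: " + action);
        return true;  // pretend one step is enough
    }
}

// A composite goal is a queue of subgoals processed front to back;
// subgoals may themselves be composites, which gives the hierarchy.
class CompositeGoal extends Goal {
    private final Deque<Goal> subgoals = new ArrayDeque<>();
    CompositeGoal add(Goal g) { subgoals.addLast(g); return this; }
    boolean process() {
        while (!subgoals.isEmpty()) {
            if (!subgoals.peekFirst().process()) return false;  // front goal still busy
            subgoals.removeFirst();
        }
        return true;
    }
}

public class EssayAgent {
    public static void main(String[] args) {
        Goal writeEssay = new CompositeGoal()
            .add(new AtomicGoal("decide general approach"))
            .add(new CompositeGoal()
                .add(new AtomicGoal("choose deconstructionist reading"))
                .add(new AtomicGoal("find contradictory values in Hamlet")))
            .add(new AtomicGoal("draft and revise"));
        writeEssay.process();
    }
}

The only point of the sketch is that a top-level goal decomposes into subgoals,
which is exactly the "write an essay on Hamlet - decide general approach - ..."
hierarchy sketched above.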


Anyone care to comment about using hierarchical goals in AGI or elsewhere?




Language learning (was Re: AW: AW: AW: AW: [agi] How general can be and should be AGI?)

2008-05-02 Thread Matt Mahoney
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
 
 Actually that's only true in artificial languages.  Children learn
 words with semantic content like "ball" and "milk" before they learn
 function words like "the" and "of", in spite of their higher
 frequency.
 
 
 
 Before they learn the words and their meanings they have to learn to
 recognize the sounds for the words. And even if they use words like
 "with", "of" and "the" later, they must be able to separate these
 function-words and relation-words from object-words before they learn
 any word.
 But separating words means classifying words, and that means knowledge
 of grammar to a certain degree.

Lexical segmentation is learned before semantics, but other grammar is
learned afterwards.  Babies learn to segment continuous speech into
words at 7-10 months [1].  This is before they learn their first word,
but is detectable because babies will turn their heads in preference to
segmentable speech.

It is also possible to guess word divisions in text without spaces
given only a statistical knowledge of letter n-grams [2].
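
(A toy sketch of that idea, assuming only letter-bigram counts collected from
spaced training text; it is far cruder than the method in [2], but it shows the
principle: guess a word boundary wherever the letter-to-letter transition
probability is low.)

// Toy word-boundary guesser: learn letter-bigram counts from spaced text,
// then split unspaced text where the bigram transition is unlikely.
// Illustration of the principle only, not the cited method.
import java.util.HashMap;
import java.util.Map;

public class Segmenter {
    static Map<String, Integer> bigram = new HashMap<>();
    static Map<Character, Integer> unigram = new HashMap<>();

    static void train(String spacedText) {
        String t = " " + spacedText.toLowerCase() + " ";
        for (int i = 0; i + 1 < t.length(); i++) {
            unigram.merge(t.charAt(i), 1, Integer::sum);
            bigram.merge(t.substring(i, i + 2), 1, Integer::sum);
        }
    }

    // P(next letter | current letter), add-one smoothed over 26 letters + space.
    static double prob(char a, char b) {
        int joint = bigram.getOrDefault("" + a + b, 0);
        int marginal = unigram.getOrDefault(a, 0);
        return (joint + 1.0) / (marginal + 27.0);
    }

    static String segment(String text, double threshold) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            out.append(text.charAt(i));
            if (i + 1 < text.length()
                    && prob(text.charAt(i), text.charAt(i + 1)) < threshold) {
                out.append(' ');  // unlikely transition: guess a word boundary
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        train("the cat sat on the mat the cat ate the rat");
        // With this tiny corpus and threshold this prints: the cat sat on the mat
        System.out.println(segment("thecatsatonthemat", 0.07));
    }
}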

Natural language has a structure that makes it easy to learn
incrementally from examples with a sufficiently powerful neural
network.  It must, because any unlearnable features will disappear.


  Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
 Techniques for parsing artificial languages fail for natural
 languages
 because the parse depends on the meanings of the words, as in the
 following example:
 
 - I ate pizza with pepperoni.
 - I ate pizza with a fork.
 - I ate pizza with a friend.
 
 
 In the days of early AI the O-O paradigm was not as sophisticated as it
 is today. The phenomenon in your example is well known in the O-O paradigm
 and is modeled by overloaded functions, which means that
 objects may have several functions with the same name but with
 different signatures.
 
 eat(Food f)
 eat(Food f, List<SideDish> l)
 eat(Food f, List<Tool> l)
 eat(Food f, List<People> l)
 ...
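
(For concreteness, a runnable Java sketch of the overloading idea quoted above.
The classes are placeholders; note also that Java's type erasure forbids
overloading on List<SideDish> versus List<Tool> alone, so the sketch uses plain
parameter types for the last two cases.)

// Illustrative only: "eat" overloaded by signature, as in the quoted example.
import java.util.List;

class Food     { String name; Food(String n)     { name = n; } }
class SideDish { String name; SideDish(String n) { name = n; } }
class Tool     { String name; Tool(String n)     { name = n; } }
class Person   { String name; Person(String n)   { name = n; } }

public class Eater {
    void eat(Food f)                   { System.out.println("ate " + f.name); }
    void eat(Food f, List<SideDish> s) { System.out.println("ate " + f.name + " with " + s.get(0).name); }
    void eat(Food f, Tool t)           { System.out.println("ate " + f.name + " with a " + t.name); }
    void eat(Food f, Person p)         { System.out.println("ate " + f.name + " with " + p.name); }

    public static void main(String[] args) {
        Eater me = new Eater();
        // The same surface form "ate pizza with X" is routed to a different
        // signature depending on the type of X.
        me.eat(new Food("pizza"), List.of(new SideDish("pepperoni")));
        me.eat(new Food("pizza"), new Tool("fork"));
        me.eat(new Food("pizza"), new Person("a friend"));
    }
}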

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.

 I think it is clear that there are representations like classes,
 objects, relations between objects, and attributes of objects.
 
 But the crucial questions are:
 How did we and how do we build our O-O models?
 How did the brain create abstract concepts like "ball" and "milk"?
 How do we find classes, objects and relations?

We need to understand how children learn grammar without any concept of
what a noun or a verb is.  Also, how do people learn hierarchical
relationships before they learn what a hierarchy is?

1. Jusczyk, Peter W. (1996), "Investigations of the word segmentation
abilities of infants", 4th Intl. Conf. on Speech and Language
Processing, Vol. 3, 1561-1564.

2. http://cs.fit.edu/~mmahoney/dissertation/lex1.html


-- Matt Mahoney, [EMAIL PROTECTED]



AW: Language learning (was Re: AW: AW: AW: AW: [agi] How general can be and should be AGI?)

2008-05-02 Thread Dr. Matthias Heger

 Matt Mahoney [mailto:[EMAIL PROTECTED] wrote

 eat(Food f)
 eat(Food f, List<SideDish> l)
 eat(Food f, List<Tool> l)
 eat(Food f, List<People> l)
 ...

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.



As I said, my intention is not to find a set of O-O like rules to create
AGI.
The fact that early approaches failed to build AGI from a set of similar rules
does not prove that AGI cannot consist of such rules.

For example, there were also approaches to create AI with biologically inspired
neural networks, with some minor success, but there was no real
breakthrough there either.

So this does not prove anything except that the problem of AGI is not so easy
to solve.

The brain is still a black box regarding many phenomena.

We can analyze our own conscious thoughts and our communication, which is
nothing other than sending ideas and thoughts from one brain to another
via natural language.

I am convinced, that the structure and contents of our language is not
independent of the internal representation of knowledge.

And from language we must conclude that there are O-O like models in the
brain because the semantics is O-O.

There might be millions of classes and relationships.
And surely every day or night the brain refactors parts of its model.

The roadmap to AGI will probably be top-down and not bottom-up.
The bottom-up approach is used by biological evolution.

Creating AGI by software engineering means that we first must know where we
want to go and then how to go there.

Human language and conscious thought suggest that AGI must be able to
represent the world in an O-O like way at the top level.
So this ability is the answer to the question of where we want to go.

Again, this does not mean that we must find all the classes and objects. But
we must find an algorithm that generates O-O like models of its environment
based on its perceptions and some bias, where the need for the bias follows
from performance considerations.

We can expect that the top-level architecture of AGI is the easiest part of
an AGI project, because the contents of our own consciousness give us some
hints (but not all) about how our own world representation works at the top
level. And this is O-O in my opinion. There is also a phenomenon of associations
between patterns (classes). But this is just a question of retrieving
information and attending to relevant parts of the O-O model, and is no
contradiction to the O-O paradigm.

When we go to lower levels, it is clear that difficulties arise.
The reason is that we have no possibility for conscious introspection of the
low levels in our brain. Science gives us hints mainly for the lowest levels
(chemistry, physics...).

So the middle layers of AGI will be the most difficult layers.
By the way, this is also often the case in normal software.
In the middle layers there will be base functionality and the framework
for the top level.







Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Charles D Hixson

Dr. Matthias Heger wrote:

Performance is not an unimportant question. I assume that AGI necessarily
has costs which grow exponentially with the number of states and actions, so
that AGI will always be interesting only for toy domains.

My assumption is that human intelligence is not truly general intelligence
and therefore cannot serve as an existence proof that
AGI is possible. Perhaps we see more intelligence than there really is.
Perhaps human intelligence is to some extent overestimated, and an
illusion like free will.

Why? In truly general domains, every experience of an agent can only be used
for the single particular state and action in which the experience was made. Every
time your algorithm makes generalizations from known state-action pairs
to unknown state-action pairs, this is in fact usage of knowledge about
the underlying state-action space, or it is just guessing and only a matter
of luck.

So truly general AGI algorithms must visit every state-action pair at least
once to learn what to do in what state.
Even in small real world domains the state spaces are so big that it would
take longer than the age of the universe to go through all states. 


For this reason true AGI is impossible and human intelligence must be narrow
to a certain degree. 



  
I would assert a few things that appear to contradict your assumptions 
(and a few that support them).
1)  AGIs will reach conclusions that are not guaranteed to be correct.  
This allows somewhat lossy compression of the input data.
2) AGIs can exist, but will operate in modes.  In AGI mode they will be 
very expensive and slow.  And still be error prone.
3) Humans do have an AGI mode.  Probably more than one of them.  But 
it's so expensive to use and so slow that they strive diligently to 
avoid using it, preferring to rely on simple situation-based models (and 
discarding most of the input data while doing so).
4) When humans are operating in AGI mode, they are not considering or 
using ANY real-time data (except to hold and replay notes).  The process 
is too slow.


The two AGI modes that I believe people use are 1) mathematics and 2) 
experiment.  Note that both operate in restricted domains, but within 
those domains they *are* general.  (E.g., mathematics cannot generate 
its own axioms, postulates, and rules of inference, but given them it 
is general.)  Because of the restricted domains, many problems can't 
even be addressed by either of them, so I suspect the presence of other 
AGI modes.  Possibly even slower and more expensive to use.


I suppose that one could quibble that since the modes I have identified 
are restricted to particular domains, they aren't *general* 
intelligence modes, but as far as I can tell ALL modes of human thought 
only operate within restricted domains.




Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Mike Tintner

Charles: as far as I can tell ALL modes of human thought

only operate within restricted domains.


I literally can't conceive where you got this idea from :). Writing an 
essay - about, say,  the French Revolution, future of AGI, flaws in Hamlet, 
what you did in the zoo, or any of the other many subject areas of the 
curriculum - which accounts for, at a very rough estimate, some 50% of 
problemsolving within education, operates within *which* restricted domain? 
(And how *did* you arrive at the above idea?)





Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Charles D Hixson

Mike Tintner wrote:

Charles: as far as I can tell ALL modes of human thought

only operate within restricted domains.


I literally can't conceive where you got this idea from :). Writing an 
essay - about, say,  the French Revolution, future of AGI, flaws in 
Hamlet, what you did in the zoo, or any of the other many subject 
areas of the curriculum - which accounts for, at a very rough 
estimate, some 50% of problemsolving within education, operates within 
*which* restricted domain? (And how *did* you arrive at the above idea?)
Yes, I think of those as being handled largely by specialized, 
non-general, mechanisms.  I suppose that to an extent you could say that 
it's done via pattern matching, and to that extent it falls under the 
same model that I've called experimentation.  Mainly, though, that's 
done with specialized language manipulation routines.  (I'm not 
asserting that they are hard-wired.  They were built up via lots of time 
and effort put in via both experimentation and mathematics [in which I 
include modeling and statistical prediction]).


Mathematics and experimentation are extremely broad brushes.  That's a 
part of why they are so slow. 

French revolution:  Learning your history from a teacher or a text isn't 
a general pattern.  It's a short-cut that usually works pretty well.  
Now if you were talking about going on the ground and doing personal 
research...then it might count as general intelligence under the 
category of experimentation.  (Note that both mathematics and 
experimentation are generally necessary to create new knowledge, rather 
than copying knowledge from some source that has previously acquired and 
processed it.)


Future of AGI:  Creating the future of AGI does, indeed, involve general 
intelligence.  If you follow this list you'll note that it involves BOTH 
mathematics and experimentation.

Flaws in Hamlet:  I don't think of this as involving general 
intelligence.  Specialized intelligence, yes, but if you see general 
intelligence at work there you'll need to be more explicit for me to 
understand what you mean.  Now determining whether a particular 
deviation from iambic pentameter was a flaw would require a deep human 
intelligence, but I don't feel that understanding of how human emotions 
are structured is a part of general intelligence except on a very 
strongly superhuman level.  The level where the AI's theory of your mind 
was on a par with, or better than, your own.





AW: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Dr. Matthias Heger


Charles D Hixson [mailto:[EMAIL PROTECTED] 




The two AGI modes that I believe people use are 1) mathematics and 2) 
experiment.  Note that both operate in restricted domains, but within 
those domains they *are* general.  (E.g., mathematics cannot generate 
its own axioms, postulates, and rules of inference, but given them it 
is general.)  Because of the restricted domains, many problems can't 
even be addressed by either of them, so I suspect the presence of other 
AGI modes.  Possibly even slower and more expensive to use.

I suppose that one could quibble that since the modes I have identified 
are restricted to particular domains, they aren't *general* 
intelligence modes, but as far as I can tell ALL modes of human thought 
only operate within restricted domains.




AGI which only operates in restricted domains is no AGI as I understand it.
But it seems that I use this term in a much stronger sense than most
other people do. I assume that most people understand AGI as human-like
intelligence, which refers
especially to the repertoire of tasks that can be solved.

As I said, 'true AGI' does not use any bias. But any powerful intelligence
must use bias because
real-world state spaces are too complex for 'true AGI'.  So AGI as the term
is commonly used is
only approximate and limited AGI, but of course much broader than the AI of the
present and the past.

I follow some other people in thinking that human-like AGI will probably be built
from several narrow AI algorithms that work together.

Humans use and create object-oriented descriptions of the world similar to
the paradigms of object-oriented programming
languages. This paradigm is very powerful in many domains because the inner
structure of these domains is in fact object-oriented. But this does not
hold in all domains. Among other advantages, the object-oriented paradigm
helps humans make useful generalizations. For example: a television is an
electric appliance. Electric appliances need electric energy. So if there
is some new electric appliance in the future, I already know that it
will only work if it gets electric energy.
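
(A minimal Java sketch of that kind of IS-A generalization; the classes are
just an invented example:)

// Illustrative IS-A hierarchy: a property stated once for the superclass
// applies automatically to every present and future subclass.
abstract class ElectricAppliance {
    // Every electric appliance needs electric energy.
    boolean worksWithoutPower() { return false; }
}

class Television extends ElectricAppliance { }

// A "new" appliance invented later still inherits the generalization.
class FutureGadget extends ElectricAppliance { }

public class IsADemo {
    public static void main(String[] args) {
        ElectricAppliance tv = new Television();
        ElectricAppliance gadget = new FutureGadget();
        System.out.println("TV works without power: " + tv.worksWithoutPower());
        System.out.println("Future gadget works without power: " + gadget.worksWithoutPower());
    }
}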

On the other hand, the object-oriented paradigm is poor for recognition of
sounds and voices.
We can hardly describe the voice of a person as a set of classes and objects
which have properties and behavior and interact with each other. So the
object-oriented paradigm is an example of a very general paradigm which is
nevertheless not 100% useful in all domains.

And the brain probably does not have a general monolithic algorithm which finds
regularities at all levels.
The recognition of regularities in sounds is surely not solved by the same
algorithm which learns that houses have windows.
The brain even changes its architecture to some degree during its lifetime. A
baby's brain has far more synapses than an adult brain. I think during the
first years humans extend the bias they have from their genes. When
they are older, humans can solve many problems, but they rely on the bias
they obtained during childhood and from their genes.

By the way, there is a nice analogy between the brain and the universe:
You can't explore the processes of your own mind from the very first day
of your life, because your brain changed its inner structure too much in the
first years, and the algorithm for object-oriented patterns probably did
not yet exist at
that time.
The whole universe also will not be able to explore its own processes from the
big bang. At that time the inner structure changed. There were no atoms;
light was scattered always and everywhere. Therefore, we and any possible
machine in the universe can only see events from some 10^5 years after the big
bang, when there were already atoms.







Re: AW: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Matt Mahoney
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 Humans use and create object-oriented descriptions of the world
 similar to the paradigms of object-oriented programming

Object oriented programming is good for organizing software but I don't
think for organizing human knowledge.  It is a very rough
approximation.  We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does not scale well and
does not allow for incremental learning from examples.  It totally does
not work for language modeling, which is the first problem that AI must
solve.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Mike Tintner


Charles: Flaws in Hamlet:  I don't think of this as involving general
intelligence.  Specialized intelligence, yes, but if you see general 
intelligence at work there you'll need to be more explicit for me to 
understand what you mean.  Now determining whether a particular deviation 
from iambic pentameter was a flaw would require a deep human intelligence, 
but I don't feel that understanding of how human emotions are structured 
is a part of general intelligence except on a very strongly superhuman 
level.  The level where the AI's theory of your mind was on a par with, or 
better than, your own.


Charles,

My flabber is so ghasted, I don't quite know what to say.  Sorry, I've never 
come across any remarks quite so divorced from psychological reality. There 
are millions of essays out there on Hamlet, each one of them different. Why 
don't you look at a few?:


http://www.123helpme.com/search.asp?text=hamlet

There are also probably many thousands of critical essays and books on 
Hamlet (and there may well have been a million written since Shakespeare's 
time).


The reason people are able to write so many essays is that when you have to 
write on say whether Hamlet is a tragically flawed hero, you can choose to 
approach this play (and indeed virtually every other play) from many 
different angles and domains - tragic theory, psychological - Oedipal 
complex/ youthful identity crisis, political - young man caught up in 
corrupt state, moral, Elizabethan dilemma of horror of regicide vs loathing 
of tyranny, conflict between Hamlet the intellectual and the man of action, 
inferiority complex in relation to Fortinbras, sexist deconstruction of the 
suppression of Ophelia, use of poetic imagery and metaphor,  stifling 
self-awareness,  and on and on and on


The reason over 70 per cent of students procrastinate when writing essays 
like this about Hamlet, (and the other 20 odd per cent also procrastinate 
but don't tell the surveys), is in part that it is difficult to know which 
of the many available approaches to take, and which of the odd thousand 
lines of text to use as support, and which of innumerable critics to read. 
And people don't have a neat structure for essay-writing to follow. (And 
people are inevitably and correctly afraid that it will all take if not 
forever then far, far too long).


This is also the reason why a major percentage of students have difficulty 
writing an ordered essay and presenting a coherent argument -  their essays 
tend to be cluttered with too many different themes, and keep going off at 
tangents.


In short, essay writing is an excellent example of an AGI in action - a mind 
freely crossing different domains to approach a given subject from many 
fundamentally different angles.   (If any subject tends towards narrow AI, 
it is normal as opposed to creative maths).


Essay writing also epitomises the NORMAL operation of the human mind. When 
was the last time you tried to - or succeeded in concentrating for any 
length of time?


As William James wrote of the normal stream of consciousness:

Instead of thoughts of concrete things patiently following one another in a 
beaten track of habitual suggestion, we have the most abrupt cross-cuts and 
transitions from one idea to another, the most rarefied abstractions and 
discriminations, the most unheard-of combinations of elements, the subtlest 
associations of analogy; in a word, we seem suddenly introduced into a 
seething caldron of ideas, where everything is fizzling and bobbing about in 
a state of bewildering activity, where partnerships can be joined or 
loosened in an instant, treadmill routine is unknown, and the unexpected 
seems the only law.


Ditto:

The normal condition of  the mind is one of informational disorder: random 
thoughts chase one another instead of lining up in logical causal sequences.

Mihaly Csikszentmihalyi

Ditto the Dhammapada: "Hard to control, unstable is the mind, ever in 
quest of delight..."


When you have a mechanical mind that can a) write essays or tell stories or 
hold conversations  [which all present the same basic difficulties] and b) 
has a fraction of the difficulty concentrating that the brain does and 
therefore c) a fraction of the flexibility in crossing domains, then you 
might have something that actually is an AGI.








Re: [agi] How general can be and should be AGI?

2008-04-29 Thread Stan Nilsen

Mike,

I derived a few things from your response - even enjoyed it.  One point 
passed over too quickly was the question of "How knowable is the world?"


I take this to be a rhetorical question meant to suggest that we need 
all of it to be considered intelligent. This suggestion seems to be 
echoed in the statement "Which brings us to HOW MANY KINDS OF 
REPRESENTATIONS OF A SUBJECT.. do we need to form a comprehensive 
representation of the man -"


If the implication is that we need it all, the bar is too high - 
unnecessarily high.


1.  We are not building God.  The AGI does not need to grasp everything 
there is to Ben.  It doesn't have to conquer the stock market or 
dominate the institutions of man.  It need not have full recall of all 
important historical events. It need not prefer a philosophy or know the 
relative value of all things.


2.  We are building a machine that performs - intelligent behavior.  And 
because it is to have general intelligent behavior, it must grow.  The 
growth will be done by adopting new xyz? (whatever it finds useful...) 
For example, as Steve Reed increases the conversational ability of a 
machine, he will be giving more capability to the unit.  With this new 
capability the unit will have more choices.  It will be able to 
function in an environment where intelligence can be developed / 
harvested and tested.  Who says it won't grow from the advice of those 
it converses with?


3.  Confusion is amplified when there is no distinction between what it 
is to be intelligent and what it is to be super intelligent. Why make it 
more difficult than it already is?  Why ask the fledgling performer to 
do what is way beyond it's capacity at inception?


4.  If the big deal is "will an AGI ever use images?", we know that they 
will.  If the question is "can they have human comprehension of images?", 
it isn't too much of a stretch to say "yeah, probably."  As humans we 
have rich comprehension of many things.  Then again, I know many people 
who think a snake is a snake and that's all they need to know.


5.  Minimal system is a target.  In the sense of minimal system, I view 
AGI as a narrow problem.  What is the essence of intelligence?  - what's 
required to see intelligent behavior? (qualified to include the broader 
sense of general intelligence, that is including the growth factors.)


Mike, are you saying that there is no such thing as a minimal system?

6. The problem of THIS minimal system is that it is complicated.  A few 
techniques and methods won't do - else such a system exhibiting general 
intelligent behavior would exist and be growing today.


My point - there will continue to be *misunderstanding* if intelligence 
is viewed without distinguishing mature from fledgling.


I'm interested in the minimal system.  I consider it my good fortune 
to have a good seat to observe historic events - I appreciate the 
project, this list, and its contributors.





Mike Tintner wrote:

Matthias: a state  description could be:
..I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc.
etc.
.. 


I think studying the limitations of human intelligence or better to say the
predefined innate knowledge in our algorithms is essential to create that
what you call AGI. Because only with this knowledge you can avoid the
problem of huge state spaces.



You did something v. interesting, which is you started to ground the
discussion about general intelligence. These discussions are normally 
almost

totally ungrounded as to the SUBJECTS/OBJECTS  of intelligence.

Essentially, the underlying perspectives of discussions of GI in this 
field are computational and mathematical re the MEDIUM of intelligence. 
People basically think along the lines of: how much information can a 
computer hold, and how can it manipulate that information? But that - 
the equivalent would be something like: how much can a human brain hold 
and manipulate? - is not all there is to intelligence.


What is totally missing is a philosophical and semiotic perspective. A 
philosopher looks at things v. differently and asks essentially :  how 
much information can we get about a given subject (and the world 
generally)? A semioticist asks: how much and what kinds of information 
about any given subject (or the world generally) can different forms of 
representation give us?  (A verbal description, photo, movie, statue 
will all give us different forms of info and show different dimensions 
of a subject).



The AI-er asks how much information (about the world) can I and my 
machine handle? The philosopher: how much information about the world 
can we actually *get*? - How knowable is the world? ANd what do we have 
to do to get and present knowledge about the world?


If you are 

Re: [agi] How general can be and should be AGI?

2008-04-29 Thread Mike Tintner

Stan

I'm putting together a detailed paper on this, so overall it will be best to 
wait for that.


My posts today give the barest beginning to my thinking, which is that you 
start to understand the semiotic requirements for a general intelligence by 
thinking about the *things* that it must know about, and then look at the 
dimensions of things that different sign systems - 
maths/logic/language/schemas/ still images/ dynamic images - *allow* you to 
see.


AGI-ers and indeed most of our culture still think pre-semiotically, and 
aren't aware that every sign system we use is like a different set of 
spectacles, and focusses on certain dimensions and problems of things, but 
totally excludes others.


My focus is not so much on the different stages of general intelligence - an 
evolutionary perspective - although I do think about that. Ironically, it is 
people who want their AGI's to converse straight away, or variously handle 
language, who are actually starting in a sense at the godlike, super - human 
end.


It actually takes human intelligence many developmental steps to proceed 
from being able to process simple, highly specific, concrete, here-and-now 
this-and-that words for objects and people  in immediate scenes, to being 
able to think in language about vast superclasses of things and creatures 
spread out over zillions of scenes and billions of years past and future. 
The idea that you can process, say, "the history of the universe is a hard 
subject to think about" with the same single- or simple-level processing as 
"it's hard to see where the key is in this room" is an absurd illusion. 
Similarly, the idea that you can process all numbers and mathematical 
entities with the same ease is absurd. It took a long time historically for 
mathematicians to even dare to think about infinity - a taboo subject 
until the printing press made it something that could be in part concretely 
imagined.


I'll try then to put out a paper on the semiotics - the bare minimum 
requirements in terms of sign systems - that I think essential to solve the 
main problems of AGI, and why, shortly.


But re your underlying question -  I don't know how tough it will all be. My 
personal preference is that, as s.o. else just suggested, you guys should 
link up with some of the roboticists - you both seem to need and complement 
each other in some ways. But however tough it is, one thing's for sure - it 
won't do any good to pretend it's easier. AGI will just keep banging its 
head into brick walls, like it has done for over 50 years. The shorter your 
cuts, the longer it will actually take.





AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger

 Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54

 Yes, truly general AI is only possible in the case of infinite
 processing power, which is
 likely not physically realizable.   
 How much generality can be achieved with how much
 processing power is not yet known -- math hasn't advanced that far yet.


My point is not only that 'general intelligence without any limits' would
need infinite resources of time and memory.
This is trivial, of course. What I wanted to say is that any intelligence has
to be narrow in a sense if it wants to be powerful and useful. There must
always be strong assumptions about the world deep in any algorithm of useful
intelligence.

Let me explain this point in more detail:

By useful and powerful intelligence I mean algorithms that do not need
resources which grow exponentially with state and action space.

Let's take the credit assignment problem of reinforcement learning.
The agent has several sensor inputs which build the perceived state space of
its environment.
So if the algorithm is truly general, the state space grows exponentially
with the number of sensor inputs
and the number of past time steps it considers. Every pixel of the
eye's retina is a part of the
state description if you are truly general. And every tiny detail of the
past may be important if you are
truly general.
And even if you are less general and describe your environment not by pixels
but by words of common language,
the state space is huge.
For example, a state  description could be:

 ...I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc. etc.
...

Even this far less general state description would fill pages. 

So an AGI agent acts in huge state spaces and huge action spaces. It
always has to solve the credit assignment problem: which action in which state
is responsible for the current outcome in the current situation? And which
action in which state will give me the best outcome? A truly general AI
algorithm without much predefined domain knowledge, and suitable
for arbitrary state spaces, will have to explore the complete state-action
space, which as I said grows exponentially with sensor inputs and time.
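
(A back-of-the-envelope sketch of that growth, with arbitrary example numbers:
n binary sensors observed over a window of h time steps give 2^(n*h) distinct
sensor histories, and a tabular learner would need an entry for every
state-action pair.)

// Illustrative only: size of a tabular state-action space for n binary
// sensors, a history window of h time steps, and a fixed number of actions.
import java.math.BigInteger;

public class StateSpaceSize {
    public static void main(String[] args) {
        int actions = 10;
        int history = 4;                      // past time steps considered
        int[] sensorCounts = {10, 100, 1000}; // number of binary sensor inputs
        for (int n : sensorCounts) {
            // |S| = 2^(n*h): every combination of readings over the window.
            BigInteger states = BigInteger.valueOf(2).pow(n * history);
            BigInteger pairs  = states.multiply(BigInteger.valueOf(actions));
            System.out.printf("n=%d binary sensors, h=%d steps: ~10^%d state-action pairs%n",
                    n, history, pairs.toString().length() - 1);
        }
    }
}

The table explodes long before the sensor array approaches anything like a
retina, which is the point about needing strong bias.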

I think every useful intelligence algorithm must always avoid the pitfall
of exponential costs, and the only way to do this is to be less general and
to give the agent more predefined domain knowledge (implicit or explicit,
symbolic or non-symbolic, procedural or non-procedural).
Even if you say human-level AI is able to generate its own state spaces,
there is still the problem that the initial sensory state space is of
exponential extent.

So in every useful AGI algorithm there must be certain strong limits in the form of
explicit or implicit rules for how to represent the world initially and/or how
to generalize and build a world representation from experiences.

This means that the only way to avoid the problem of exponential growth
is to hard-code implicit or explicit assumptions about the world.
And these initial assumptions are the most important limits of any useful
intelligence. They are much more important than the restrictions of time and
memory, because with these limits it will probably no longer be true that
you can learn everything and solve any solvable problem if you only get
enough resources. The algorithm in itself must have fixed inner limits to be
something useful in real-world domains. These limits cannot be overcome with
experience.

Even an algorithm that guesses new algorithms and replaces itself if it can
prove that it has found something more useful than itself has fixed
statements that it cannot overcome. More importantly: if you want to make such
an algorithm practically useful you have to give it predefined rules for how to
reduce the huge space of possible algorithms. And again these rules are a
more important problem than the lack of memory and space.

One could argue that the algorithm can change these rules through its own experience.
But you can only prove that changing the rules algorithmically enhances
performance if the agent has good experiences with the new rules. You
cannot prove that certain algorithms would not improve your performance if
you don't know those algorithms at all. Remember: the rules do not define a
certain state or algorithm; they define a reduction of the whole
algorithm space the agent can consider while trying to become more powerful.
The rules within the algorithm contain knowledge of what the learning
agent does not know itself and cannot learn.
Even if you can learn to learn. And learn to learn to learn. And ...
Every recursive procedure has to have a non-reducible base, and it is clear
that the overall performance and abilities depend crucially on that basic
non-reducible procedure. If this procedure is too general, the performance
slows exponentially with the space with which this 

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Pei Wang
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants be powerful and useful. There must
  always be strong assumptions of the world deep in any algorithm of useful
  intelligence.

From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf Page 5:
---
3.3. General-purpose systems are not as good as special-purpose ones

Compared to the previous one, a weaker objection to AGI is to insist
that even though general-purpose systems can be built, they will not
work as well as special-purpose systems, in terms of performance,
efficiency, etc.

We actually agree with this judgment to a certain degree, though we do
not take it as a valid argument against the need to develop AGI.

For any given problem, a solution especially developed for it almost
always works better than a general solution that covers multiple types
of problem. However, we are not promoting AGI as a technique that will
replace all existing domain-specific AI
techniques. Instead, AGI is needed in situations where ready-made
solutions are not available, due to the dynamic nature of the
environment or the insufficiency of knowledge about the problem. In
these situations, what we expect from an AGI system are not optimal
solutions (which cannot be guaranteed), but flexibility, creativity,
and robustness, which are directly related to the generality of the
design.

In this sense, AGI is not proposed as a competing tool to any AI tool
developed before, by providing better results, but as a tool that can
be used when no other tool can, because the problem is unknown in
advance.
---

Pei



AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Performance is not an unimportant question. I assume that AGI necessarily
has costs which grow exponentially with the number of states and actions, so
that AGI will always be interesting only for toy domains.

My assumption is that human intelligence is not truly general intelligence
and therefore cannot serve as an existence proof that
AGI is possible. Perhaps we see more intelligence than there really is.
Perhaps human intelligence is to some extent overestimated, and an
illusion like free will.

Why? In truly general domains, every experience of an agent can only be used
for the single particular state and action in which the experience was made. Every
time your algorithm makes generalizations from known state-action pairs
to unknown state-action pairs, this is in fact usage of knowledge about
the underlying state-action space, or it is just guessing and only a matter
of luck.

So truly general AGI algorithms must visit every state-action pair at least
once to learn what to do in what state.
Even in small real world domains the state spaces are so big that it would
take longer than the age of the universe to go through all states. 

For this reason true AGI is impossible and human intelligence must be narrow
to a certain degree. 



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Sunday, 27 April 2008 13:50
To: agi@v2.listbox.com
Subject: Re: [agi] How general can be and should be AGI?

On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants be powerful and useful. There must
  always be strong assumptions of the world deep in any algorithm of useful
  intelligence.

From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf Page 5:
---
3.3. General-purpose systems are not as good as special-purpose ones

Compared to the previous one, a weaker objection to AGI is to insist
that even though general-purpose systems can be built, they will not
work as well as special-purpose systems, in terms of performance,
efficiency, etc.

We actually agree with this judgment to a certain degree, though we do
not take it as a valid argument against the need to develop AGI.

For any given problem, a solution especially developed for it almost
always works better than a general solution that covers multiple types
of problem. However, we are not promoting AGI as a technique that will
replace all existing domain-specific AI
techniques. Instead, AGI is needed in situations where ready-made
solutions are not available, due to the dynamic nature of the
environment or the insufficiency of knowledge about the problem. In
these situations, what we expect from an AGI system are not optimal
solutions (which cannot be guaranteed), but flexibility, creativity,
and robustness, which are directly related to the generality of the
design.

In this sense, AGI is not proposed as a competing tool to any AI tool
developed before, by providing better results, but as a tool that can
be used when no other tool can, because the problem is unknown in
advance.
---

Pei



Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Pei Wang
If by "truly general" you mean "absolutely general", I agree it is not
possible, but that is not what we are after. Again, I hope you will first
find out what people are actually doing under the name AGI, and then make
your argument against that, rather than against the AGI in your imagination.

For example, I fully agree that visiting every state-action pair is
hopeless, but who in AGI is doing that or suggesting that? Just
because traditional AI is trapped by this methodology doesn't mean
there is no other possibility. Who said AI systems must do state-based
planning?

I'm not trying to convince you that AGI can be achieved --- that is
what people are exploring --- but that you should not assume that
traditional AI has tried all possibilities and that there cannot be
anything new.

Of course every intelligent system (human or computer) has its limits
(nobody denies that), but those limits are fundamentally different from
the limits of the current AI systems.

Pei


AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
OK, maybe we mean different things by the name AGI.
I agree that traditional AI is just the beginning. And even if human
intelligence is no proof of what I mean by AGI,
it is clear that human intelligence is far more powerful than any AI built
so far. But perhaps only for subtle reasons, or for reasons we will not know
even in 100 years, who knows.

I am convinced that superhuman intelligence is possible,
but mainly because we will use faster and more hardware than our brain has.

Biology created intelligence by trial and error, and it used billions of
years and trillions of animals to do so.

The goal of doing it better within 10 or 20 years from now, and from scratch,
seems to me way too ambitious.

I think studying the limitations of human intelligence, or better, the
predefined innate knowledge in our algorithms, is essential to create what
you call AGI, because only with this knowledge can you avoid the
problem of huge state spaces.
 

 

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

   Ben Goertzel [mailto:[EMAIL PROTECTED] wrote on 26 April 2008 19:54


   Yes, truly general AI is only possible in the case of infinite
   processing power, which is likely not physically realizable.
   How much generality can be achieved with how much processing power
   is not yet known -- math hasn't advanced that far yet.


  My point is not only that 'general intelligence without any limits' would
  need infinite resources of time and memory.
  This is trivial of course. What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants to be powerful and useful. There must
  always be strong assumptions about the world deep in any algorithm of useful
  intelligence.

This is a consequence of the No Free Lunch theorem, essentially, isn't it?

http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

With infinite resources you use exhaustive search (like AIXI or the
Godel Machine) ...
with finite resources you can't afford it, so you need to use (explicitly or
implicitly) search that is guided by some inductive biases.
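
A tiny toy illustration of that point (the domain size and training labels
are invented): enumerating every Boolean function consistent with some
observed data shows that the consistent functions disagree evenly about
unseen inputs, so any above-chance prediction has to come from an inductive
bias rather than from the data alone.

    from itertools import product

    # Boolean functions over four possible inputs (0..3), two of them observed.
    train = {0: 1, 1: 0}

    # Every function consistent with the training data:
    consistent = [f for f in product([0, 1], repeat=4)
                  if all(f[x] == y for x, y in train.items())]

    # The consistent functions predict 0 and 1 equally often for the unseen
    # input 3, so beating chance there requires a bias, not more data points
    # about inputs 0 and 1.
    print(len(consistent), sum(f[3] for f in consistent))   # -> 4 2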

See Eric Baum's book What Is Thought? for much discussion on genetically
encoded inductive bias and its role in AI.

-- Ben G



Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Mike Tintner

Matthias: a state  description could be:
...I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc.
etc.
..
I think studying the limitations of human intelligence, or better, the
predefined innate knowledge in our algorithms, is essential to create what
you call AGI, because only with this knowledge can you avoid the
problem of huge state spaces.



You did something v. interesting, which is you started to ground the
discussion about general intelligence. These discussions are normally almost
totally ungrounded as to the SUBJECTS/OBJECTS  of intelligence.

Essentially, the underlying perspectives of discussions of GI in this field 
are computational and mathematical re the MEDIUM of intelligence. People 
basically think along the lines of: how much information can a computer 
hold, and how can it manipulate that information? But that - the equivalent 
would be something like: how much can a human brain hold and manipulate? - 
is not all there is to intelligence.


What is totally missing is a philosophical and semiotic perspective. A 
philosopher looks at things v. differently and asks essentially :  how much 
information can we get about a given subject (and the world generally)? A 
semioticist asks: how much and what kinds of information about any given 
subject (or the world generally) can different forms of representation give 
us?  (A verbal description, photo, movie, statue will all give us different 
forms of info and show different dimensions of a subject).



The AI-er asks how much information (about the world) can I and my machine 
handle? The philosopher: how much information about the world can we 
actually *get*? - How knowable is the world? And what do we have to do to 
get and present knowledge about the world?


If you are truly serious here, I suggest, you have to look at intelligence 
from both perspectives.


You took a kitchen as a possible subject to ground the discussion. Why not 
take something easier to
think about - to consider the difficulties of getting to know the world - a 
human being. Take one at random:


http://lifeboat.com/board/ben.goertzel.jpg

What does anyone, any society or any intelligence need in order to be a)
intelligent about - and ultimately b) omniscient about - this man?

How many disciplines of knowledge studying how many LEVELS OF THE SUBJECT -
levels of this man and his body, behaviour and relationships do we need to
bring in? Presumably we need somewhere between something and everything our
culture has to offer - every branch of science - psychology, social
psychology, biopsychology, social anthropology, behavioural economics,
cognitive science, neuroscience, down to cardiology, gastroenterology,
immunology ... down to biochemistry, molecular biology, genetics - focussing
on every part of his behaviour, and every part or subsystem of his body.
(Would you want a total systems science view which would attempt to
integrate all their views into one totally integrated model of the man? Our
culture doesn't offer such a thing, only a piecemeal view, but maybe you'd
like to attempt one?)

And those are just the generalists. Then we really ought to bring in
somewhere between some and all kinds of the arts - they specialise in
individual portraits. Novelists, painters, sculptors, moviemakers,
cartoonists etc. A Scorsese at least, to do justice to his titanic struggles.
They can all show us different dimensions of this man.


Which brings us to HOW MANY KINDS OF REPRESENTATIONS OF A SUBJECT... do we
need to form a comprehensive representation of the man -
textual, references on Google, mathematical, photographic, drawing, cartoon,
movies, statues, 3-d molecular models, holograms, tax returns, bank
statements;

how many scientific representations - mammogram, cardiogram, urine samples,
skin samples, biopsies, blood tests...

And then how much PERSONAL INTERACTION WITH THE SUBJECT is needed. Should 
you have interviewed, worked with him, partied with him, had sex with 
him?  -  And the SUBJECT'S RELATIONS ... should you know his family, friends 
etc.?


How extensive REPRESENTATIONS OF THE SUBJECT'S ENVIRONMENT... his home, 
office, car, beat-up chair, clothes etc...local neighbourhood, town, etc..


And what DEGREE OF EMBODIMENT should you, the knower - or your computer -
have? Because, obviously, you can only identify with any given subject to
the extent that you have a similar/the same body. Hence philosophy's "what's
it like to be a bat?" and "how can you know *my* qualia?" obsessions. Even
God, according to some religions, had to become flesh to know humans.


Ultimately, I suggest, PERFECTION ...near godlike knowledge and intelligence 
would involve having a PERFECT REPLICA OF THE SUBJECT AND HIS ENVIRONMENT... 

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread William Pearson
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]:

   Ben Goertzel [mailto:[EMAIL PROTECTED] wrote on 26 April 2008 19:54


   Yes, truly general AI is only possible in the case of infinite
   processing power, which is likely not physically realizable.
   How much generality can be achieved with how much processing power
   is not yet known -- math hasn't advanced that far yet.


  My point is not only that 'general intelligence without any limits' would
  need infinite resources of time and memory.
  This is trivial of course. What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants to be powerful and useful. There must
  always be strong assumptions about the world deep in any algorithm of useful
  intelligence.

I am probably the one on this list closest to the position you
think AGI means.

I would agree: any algorithm needs to be very specific to be useful.
However, the *architecture* of an AGI needs to be general (by this I
mean capable of instantiating any TM-equivalent function, from input
and current state to output and current state).

So I think the lowest level of the system space should be massive, as
you argue against. However, I would not make it a search space as
such, with a fixed method searching it. On its own it should be
passive, but it should be able to have active programs within it. As
these are programs in their own right, they can search the space of
possible programs. These programs could search subspaces of the entire
space, or get information from outside about which subspaces to search.
However, there is no limit on which subspaces they actually search.

What makes my approach different from a bog-standard computer system is
that it would guide the searching of the programs within it by acting
as a reinforcement-based ratchet. Those programs with the most
reinforcement, i.e. those that act sensibly, will be able to protect and
expand the influence they have over the system. With the right internal
programs and environment, this will look as if the system has a goal
for what it is trying to become.

See this post for more details.
http://www.mail-archive.com/agi@v2.listbox.com/msg02892.html
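
A very loose sketch of the "reinforcement-based ratchet" idea as described
above (the Program class, the share-update rule, and all numbers are invented
purely for illustration, not the actual design):

    class Program:
        def __init__(self, name, behaviour):
            self.name = name
            self.behaviour = behaviour   # maps an observation to an action
            self.share = 1.0             # current influence over the system

    def step(pool, observation, reward):
        # Every program acts; reinforcement ratchets its share up or down,
        # and the programs compete for a fixed total budget of influence.
        for p in pool:
            p.share = max(0.0, p.share + reward(observation,
                                                p.behaviour(observation)))
        total = sum(p.share for p in pool) or 1.0
        for p in pool:
            p.share /= total

    pool = [Program("always_left", lambda obs: "left"),
            Program("copy_input", lambda obs: obs)]
    step(pool, "right", lambda obs, act: 1.0 if act == obs else -0.5)
    print({p.name: round(p.share, 2) for p in pool})   # copy_input gains influence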


  Every recursive procedure has to have a non-reducible base, and it is clear
  that the overall performance and abilities depend crucially on that basic
  non-reducible procedure. If this procedure is too general, performance
  slows exponentially with the size of the space on which this basic
  procedure works.

There are recursive procedures that abandon the base; see, for example,
booting a machine.

 Will Pearson



AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Mike Tintner wrote
 What is totally missing is a philosophical and semiotic perspective. A
 philosopher looks at things v. differently and asks essentially: how much
 information can we get about a given subject (and the world generally)? A
 semioticist asks: how much and what kinds of information about any given
 subject (or the world generally) can different forms of representation give
 us? (A verbal description, photo, movie, statue will all give us different
 forms of info and show different dimensions of a subject).

 The AI-er asks how much information (about the world) can I and my machine
 handle? The philosopher: how much information about the world can we
 actually *get*? - How knowable is the world? And what do we have to do to
 get and present knowledge about the world?

I think the typical AIer asks for clever algorithms and source code.
Some AIers see their problem as a part of control theory. Others see
their problem as the problem of emulating the biological brain.

But very few ask about the regularities of our world, maybe because they
think that this should be done by the
intelligent software we want to build. But I am convinced that goal-directed
engineering to develop AGI is only possible
if we model the most basic regularities of our universe in the AGI
algorithms.

Humans have experiences in single states and can generalize the new
knowledge to huge domains. Most AIers ask only: how do we generalize? But the
answer depends on the question: why is it even possible that we may
generalize? And this question is rarely considered.

This was the point I wanted to make: our universe does not only help
life to evolve, it also seems to be very friendly to intelligence,
because it is full of regularities at all levels, from microcosm to macrocosm.
And useful intelligence is only possible in a world with regularities,
because only with regularities can you avoid searching through trillions of
states.

For example: I can talk tomorrow with a person whom I have never seen before.
I can do this just from my past social experiences with other people.
Why is this possible?
Another example: I see a mosquito for the first time in my life. I hear the
sound. I see how it lands on my right arm.
And I see it flying away. Finally my skin becomes red at that place and I
feel a little pain there.
Why can I conclude that the mosquito is the reason for the pain? Why can I
know that the same could happen if the mosquito landed on my shoulder? Why
do I know that the room is not important for the phenomenon of the pain?

I think an AIer must ask such questions.
And he has to see them from both perspectives: on the one hand, the software
engineer who designs the intelligent algorithm;
on the other hand, the scientist who thinks about nature.





[agi] How general can be and should be AGI?

2008-04-26 Thread Dr. Matthias Heger
How general should AGI be?

When I heard the term AGI for the first time, I had to think of the
General Problem Solver from 1959
(http://en.wikipedia.org/wiki/General_Problem_Solver).

It solved a few simple problems but was overwhelmed by real-world
problems.

Second, there is Gödel's theorem, which shows that there cannot be a
machine that can generate knowledge which is both complete and free of
contradictions.

There is another theorem that shows that 100% AGI is impossible:
Alan Turing proved in 1936 that a general algorithm to solve the halting
problem for
all possible program-input pairs cannot exist.
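
For reference, the standard diagonalization behind Turing's result, sketched
in Python (the halts() oracle is hypothetical; the point of the theorem is
precisely that no such total function can exist):

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) would halt."""
        raise NotImplementedError("no such total computable function exists")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about program(program).
        if halts(program, program):
            while True:
                pass
        return "halted"

    # diagonal(diagonal) yields a contradiction whichever answer the oracle
    # gives, which is exactly why the oracle cannot exist.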

On the other hand, we think that AGI is possible because we believe that we
ourselves ARE AGI systems.
But as the theorems show without any doubt: perfect AGI is impossible.

Of course I am convinced that there can be systems which are far more
intelligent than humans.
But even these systems will have their limits.

Perhaps further research into our own limits can help us to construct more
intelligent machines.
Perhaps the mechanisms behind human intelligence are so powerful precisely
because they are not designed
to be a real general problem solver.

Intelligence has a lot to do with recognizing regularities in the patterns of
signals which are obtained from the environment.
We can see that humans are very powerful in this ability. But are we really
powerful at recognizing GENERAL regularities?
A simple example shows that this is by far not the case:

We can recognize very detailed information from the environment
with our eyes, so we might think that our optical sense is a general pattern
recognizer. But imagine you recorded the sound wave of a speaking man and
visualized the wave on a screen. It would be impossible for you to recognize
the words or even the voice of the person. I am sure that even if you
trained a child for years with these patterns, it would not learn to
understand the voice and the sentences -- perhaps a slow and error-prone
recognition of some words, but nothing nearly as powerful as our hearing.

This shows that our optical sense is not able to recognize general patterns
of our environment. And by the way: the child would not gain conscious
phenomena like qualia when analyzing the sound waves.
Our optical pattern recognizer is NOT AGI. It is narrow AI in
this sense.



Assumption 1:

### Maximally powerful intelligence and maximally general intelligence are
not possible at the same time. ###

A system which has maximally general intelligence will suffer from huge
problems of complexity.
So if we design an architecture which can evolve into very, very general
intelligence, it will very probably need
too much time and memory, so that it can only be of theoretical interest.
So one main problem of AGI is to design it to be general, but not too general.
And one of the main questions will be which features and which domain
knowledge should be hard-coded, and how.

If we define intelligence to be the ability to solve complex problems in
complex environments, we should ask what
the adequate limits of complexity are.

Life can only evolve in environments with very narrow conditions.
I think it is similar with intelligence:

Assumption 2:

### Intelligence can work and evolve only in environments with limited
conditions. ###

Nature is very, very complex because there are so many particles which
interact with each other.
It is important for intelligence that knowledge of every single particle
and of the fundamental laws of physics is not necessary to make predictions
about the environment.
The change of day and night is an example of a regularity which can be
predicted with high accuracy with very little knowledge of the details
of the environment. In a world with little structure, or with rapidly
changing structure and regularities, intelligence is surely not possible, or
at least very difficult.
Our world is a hierarchical world with encapsulated levels. You can see
regularities on high levels without knowledge of the details below those
levels.
This is certainly a key feature of our world, without which intelligent life
could not have evolved at all.

So I see the following interesting questions for AGI:
What restrictions are adequate or necessary for a practical AGI system to
obtain a good compromise between general and powerful intelligence?
What are the detailed conditions of the environment that are necessary for
intelligent systems?



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Pei Wang
From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf page 5:
---
In the current context, when we say that the human mind or an AGI system is
general purpose, we do not mean that it can solve all kinds of problems in
all kinds of domains, but that it has the potential to solve any problem in
any domain, given proper experience. Non-AGI systems lack such a potential.
---
That paper also addressed the issue of general potential vs. domain
knowledge.

I agree with you that an intelligence solving all kinds of problems in all
kinds of domains is impossible, though I don't think the conclusions of
Gödel and Turing are the major reason (or even that relevant) here. My
arguments are in
http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf

Pei


AW: [agi] How general can be and should be AGI?

2008-04-26 Thread Dr. Matthias Heger
In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
possible in this world
if you apply it not to a hypothetical machine or human being but to the
whole universe, which can be assumed to be a closed system.

The axioms are the laws of physics.
Then everything that happens in this world is an application of these
axioms.
And every step of this application is without any fault, if we suppose that
our universe does not change the
laws of physics.

So, with Gödel, the whole universe cannot generate a set of statements
about itself which is both complete and free of contradictions.
And therefore a machine which is part of the universe cannot have this
ability either.

But my main point was not to discuss the question whether perfect AGI is
possible.
I mainly wanted to point out that we ourselves have strong limits and are,
in a sense, narrow AI systems rather than AGI systems.
The example with the visualized sound wave shows that we use very
specialized pattern algorithms instead of general ones.
And of course biology does so for reasons of performance.

Perhaps it is possible for a human to see the patterns in a sound wave if he
has enough time.
But this would be a thousand times slower than the specialized pattern
recognizer for sound signals.

This shows that human intelligence is not built from general pattern
algorithms in the brain but from algorithms that are specialized for the
patterns of a specialized environment, or at least are tuned for specialized
patterns. From this arises the question whether it makes sense to think
about pattern algorithms that work with most patterns in this world. This
question is mainly a question of performance.

And my point was the assumption that we can buy general intelligence only
at a hopeless cost in time and memory.

Another example:
Imagine a child who has its first experience of pain when touching a hot
hotplate in a kitchen.
The child will learn not to touch the hotplate. But this task is very hard
if you want to solve it with a general algorithm with little domain
knowledge.

If you feel pain in your hand, what was the reason?
If you think in the AGI way, it could be
the open window in the kitchen,
the sandwich you ate an hour ago,
the fly on the desk,
your blue shirt

...

and, after trillions of other possible reasons,
the collision of the hand with the hotplate, which is obvious to us but
not obvious to any algorithm without domain knowledge.

Well, you can find the reason with more experience. But how many tries do you
need with AGI? Trillions! Because with the AGI approach you can, by
definition, not rule anything out. At the very least you have to use AGI
learning algorithms with massive predefined rules of generalization. So even
if we find clever AGI algorithms, their power will mainly depend on tuning
them to work on special real-world problems.
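
A small sketch of how fast the space of candidate explanations grows when
nothing can be ruled out in advance (the observation list and the numbers are
invented for illustration):

    observations = ["open window", "sandwich an hour ago", "fly on the desk",
                    "blue shirt", "hand touched the hotplate"]

    # Without prior assumptions, every non-empty subset of recent observations
    # is a candidate explanation for the pain:
    print(2 ** len(observations) - 1)    # 31 candidates for only 5 observations

    # With a few hundred observed facts the candidate set is astronomical,
    # which is why built-in priors (physical contact, temporal proximity, ...)
    # are needed:
    print(2 ** 300)                      # roughly 2 * 10**90 candidates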




Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Vladimir Nesov
On Sat, Apr 26, 2008 at 2:35 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 How general should be AGI?


If all you aim for is a system that has unlimited potential, then a
Universal Turing Machine is as far as you need to go, and as far as
you can go. A more important goal is to build a system that can learn
to do relevant things in a reasonable time.


  Our world is a hierarchical world with encapsulated levels. You can see
  regularities on high levels without the knowledge of the details below this
  level.
  This is certainly a key feature of our world without that intelligent life
  could not evolve at all.


I think you captured the bias that an AGI system needs to have in this
quote quite well. It's probably a good summary of what a system must
be able to do to be deemed intelligent in the sense that people are
intelligent: to be able to simulate the environment on different
levels of detail, focusing attention on different parts of the
structure, moving the attention to more general levels or more
detailed ones, to different parts of the structure, or to the future
and the past of its development. Starting from this basis, it should
be possible to develop specialized subsystems that solve specific
problems not amenable to such a description.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mark Waser
Tell me: what are the algorithms that will force you to process this image 
in an inevitable way (and what is that way?):


http://honolulu.hawaii.edu/distance/sci122/Programs/p3/Rorschach.gif

(Oh - and a, linas, Bob, Mark, et al - can we agree that there is no way 
for maths to process that image, period?)


No.  I strongly disagree with your assertion.  What you believe you are 
processing (w)holistically can easily be broken down into a series of parts 
with scaling, translation, transposition, and other standard operators.
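
One minimal illustration of what such standard operators can do (a toy
sketch, not a claim about any particular vision system; the point sets are
invented and assumed to be sampled with corresponding points): two outlines
can be compared after factoring out translation and scale.

    import numpy as np

    def normalize(points):
        pts = np.asarray(points, dtype=float)
        pts -= pts.mean(axis=0)                  # factor out translation
        scale = np.sqrt((pts ** 2).sum())
        return pts / scale if scale else pts     # factor out scale

    def shape_distance(a, b):
        # Small distance = same outline up to position and size.
        return np.linalg.norm(normalize(a) - normalize(b))

    blot = [(0, 0), (2, 1), (4, 0), (2, 3)]
    bat  = [(0, 0), (4, 2), (8, 0), (4, 6)]      # same outline, twice the size
    print(shape_distance(blot, bat))             # ~0.0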


What a rorschach epitomises is the human ability to see just about 
anything in anything (and you can't get much more general than that) -  a 
solar system in an atom, a hard, metal computer in a grey, gooey brain, or 
a penis in a rocket and the odd, million other items ...


Again, just all parts processed with normal visual operators.

Imagination. That's what's at the heart of adaptivity and general 
intelligence - and that's what AGI totally - and systemically -  lacks.


I agree -- but vision is *NOT* the source and not required.  You are merely 
faked out by the fact that it is the primary sense of human beings.


Face it.  Your arguments are *not* compelling.  You seem to believe that 
whining the same thing over and over again like a petulant child is going to 
change minds.  Trust me, unless you come up with a logical reason, more and 
more people are just going to stop listening to you.





AW: [agi] How general can be and should be AGI?

2008-04-26 Thread Dr. Matthias Heger

On Saturday, 26 April 2008 17:00 Pei Wang [mailto:[EMAIL PROTECTED]
wrote

 to many people, including me, this is exactly what AGI is
 after: a baby with all kinds of potentials, not an adult that can do
 everything.

I understand AGI in the same way, but even the term "all kinds of potentials"
seems to me
a wish which is not possible and which no human baby really has either.

Let's take the halting
problem (http://en.wikipedia.org/wiki/Halting_problem).
It is a fact that no Turing machine
(http://en.wikipedia.org/wiki/Turing_machine) can solve it for all
program-input pairs.

I think your argument is that an AGI system (e.g. a human being) can
solve any halting problem because it can change
over time by gaining more and more experience. But even the
experience-gaining human being can be regarded as a Turing machine with a
fixed and finite algorithm.
All the knowledge it can obtain is already implicitly there - in the
universe. The universe can be modeled as part of the infinite tape of the
Turing machine. The computer or the brain is the finite table of the Turing
machine. Every experience the AGI gains can be modeled as reading data from
the tape.

Even if you think of a human being who uses more and more pieces of paper
to expand his knowledge and behavior, or of a computer that grows out into
space, then simply model the whole universe as being the finite table of the
Turing machine and the tape at the same time.

So even if you point out that AGI should mean having ALL potentials
instead of having ALL abilities, AGI is impossible and can only be
approximated. I therefore prefer the term "human-level AGI".

My point is that these theoretical considerations are strong evidence for
fundamental limitations of human intelligence, and that perhaps (an
assumption) these and further limitations are to a certain degree even
necessary to solve the problem of the need for huge amounts of time, memory,
and learning steps.

Our universe is not only well built for life to evolve. It has also allowed
human-level AGI to evolve from very narrow forms of intelligence. I like the
goal of creating AGI, but I fear we want to make it too general and can
therefore not overcome the problems of complexity.





Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
  possible in this world
  if you apply it not to a hypothetical machine or human being but to the
  whole universe which can be assumed to be a closed system.

Please consult the works of Marcus Hutter (Universal AI) and Juergen Schmidhuber
(Godel Machine).   These thoughts are not new.

Yes, truly general AI is only possible in the case of infinite
processing power, which is
likely not physically realizable.   How much generality can be
achieved with how much
processing power is not yet known -- math hasn't advanced that far yet.

Humans are not totally general, yet they are much more general than any of
the AI systems
yet built.

-- Ben G



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Vladimir Nesov
On Sat, Apr 26, 2008 at 9:39 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  I think, your argumentation is that an AGI system (e.g. human being) can
  solve any halting problem because it can change
  over time by making more and more experiences. But the even the experience
  making human being can be regarded as a turing machine with a fixed and
  finite algorithm.

It's pretty bold to expect that many people could have been missing such
a flaw in your straw man for years, isn't it?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mike Tintner

MT:  http://honolulu.hawaii.edu/distance/sci122/Programs/p3/Rorschach.gif
(Oh - and a, linas, Bob, Mark, et al - can we agree that there is no way
for maths to process that image, period?)

Mark: No.  I strongly disagree with your assertion.  What you believe you are
processing (w)holistically can easily be broken down into a series of parts
with scaling, translation, transposition, and other standard operators.

Mark,

Let's see if you can actually put forward an idea, as opposed to shouting.

You've missed the point. What a human does in looking at a rorschach is to 
see - i.e. compare it with - a recognizable object or creature - a bat, 
for instance, or an ant, or a gargoyle.


So what you must tell me is how your or any geometrical system of analysis 
is going to be able to take a rorschach and come up similarly with a 
recognizable object or creature. Bear in mind, your system will be given  no 
initial clues as to what objects or creatures are suitable as potential 
comparisons. It can by all means have a large set of visual images in 
memory, as we do. But you must tell me how your system will connect the 
rorschach with any of those images, such as a bat,  - by *geometrical* 
means.


Of course, a geometrical system can be used to *analyse* an individual 
rorschach into some set of geometric forms - but only by hand, 
individually, on a one-off basis - and imperfectly. There is no geometrical 
*formula* for analysing rorschachs, because they can take an unlimited and 
non-formulaic variety of shapes, just as there is no geometrical formula for 
analysing the diverse shapes of living creatures, like bats, ants and human 
beings.  So there is, by extension,  no geometrical, or indeed any other 
formulaic means to *compare* a rorschach with any object or  creature.


Nor is there any geometrical means to compare *any* irregular objects - a 
slug and, say,  a human being walking along, a purse, say,  and a vagina, a 
rock and a chair, a moustache and a walrus.


If you think there is, then you obviously have solved some of the most 
important, unsolved problems of AGI, such as analogy, metaphor and 
creativity.


You've also turned geometers into designers and artists

You also may have gone some way to solving the problem of lookup - the way 
the brain can find a similar image in just a few steps, where computers that 
can only search blindly take millions of steps, and may still come up with 
nothing.


So I await your geometric solution to this problem - (a mere statement of 
principle will do) - with great interest. Well, actually no. Your answer is 
broadly predictable - you 1) won't have any idea here  2) will have nothing 
to say to the point and  3) be, as usual, all bark and no bite - all insults 
and no ideas.










Re: [agi] How general can be and should be AGI?

2008-04-26 Thread BillK
On Sat, Apr 26, 2008 at 8:09 PM, Mike Tintner wrote:
  So what you must tell me is how your or any geometrical system of analysis
 is going to be able to take a rorschach and come up similarly with a
 recognizable object or creature. Bear in mind, your system will be given  no
 initial clues as to what objects or creatures are suitable as potential
 comparisons. It can by all means have a large set of visual images in
 memory, as we do. But you must tell me how your system will connect the
 rorschach with any of those images, such as a bat,  - by *geometrical*
 means.

snip


This is called content-based image retrieval (CBIR), also known as
query by image content (QBIC) or content-based visual information
retrieval (CBVIR): the application of computer vision to the image
retrieval problem, that is, the problem of searching for digital
images in large databases.
http://en.wikipedia.org/wiki/CBIR
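
A toy sketch of the CBIR idea (the arrays and the 'database' are invented,
and real systems use far richer features than a single grey-level histogram):

    import numpy as np

    def grey_histogram(image, bins=4):
        hist, _ = np.histogram(image, bins=bins, range=(0, 256), density=True)
        return hist

    def nearest(query, database):
        q = grey_histogram(query)
        return min(database,
                   key=lambda name: np.abs(grey_histogram(database[name]) - q).sum())

    database = {"dark_blot": np.full((16, 16), 30),
                "light_blot": np.full((16, 16), 220)}
    print(nearest(np.full((16, 16), 40), database))   # -> dark_blot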

This is a hot area of computer research, with many test systems. (see article).

Nothing to do with AGI, of course.

Every post from Mike seems to be yet another different way of saying
'You're all wrong!'
Are you sure you want to be on this list, Mike?

BillK



AW: [agi] How general can be and should be AGI?

2008-04-26 Thread Dr. Matthias Heger
I don't understand your point fully; perhaps my English is too bad.
I had the impression that Pei Wang thought that Gödel's theorem and the
halting problem do not apply to human beings because they are open systems.

Perhaps he is right, but not because of the open-system issue; rather because
it is not clear whether the universe can really be modeled as a Turing
machine.
I only wanted to clarify this point and did not claim to have found
something new.




Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Vladimir Nesov
On Sat, Apr 26, 2008 at 11:42 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 Don't understand your point fully. Perhaps my English is too bad.
  I have had the impression, that pei wang thought that gödels theorem and the
  halting problem do not apply for human beings because they are open systems.


  Perhaps he is right but not because of the open system issue but because it
  is not clear whether the universe can really be modeled as a turing machine.
  I only wanted to clarify this point and did not claim to have found
  something new.


My complaint was merely about what I heard as an assumption that many
people here believe that AGI must be able to learn to solve
halting problems. Your assertion below does have many problems (think
of the universe as a finite state machine, or as a fixed Turing
machine running forward, or as otherwise limited in information):

On Sat, Apr 26, 2008 at 6:03 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  So, with Goedel, the whole universe cannot generate a set of statements
  about itself which are both complete and without any contradictions.
  And therefore a machine which is part of the universe cannot have this
  ability too.


-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread William Pearson
2008/4/26 Dr. Matthias Heger [EMAIL PROTECTED]:
 How general should be AGI?

My answer: as *potentially* general as possible, in a similar fashion
to the way a UTM is as potentially general as possible, but with more
purpose.

There are plenty of problems you can define that don't need the
halting problem in order to be impossible to solve, e.g. remembering a
number with more digits than there are potential states of the universe.

Some other comments: have you looked at the literature on neuroplasticity?

This Wired article is a good introduction.

http://www.wired.com/wired/archive/15.04/esp_pr.html

Although there are more academic papers out there, a Google search can find
them.

 Will Pearson



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mike Tintner
BillK: MT:  So what you must tell me is how your or any geometrical system
of analysis is going to be able to take a rorschach and come up similarly
with a recognizable object or creature. Bear in mind, your system will be
given no initial clues as to what objects or creatures are suitable as
potential comparisons. It can by all means have a large set of visual images
in memory, as we do. But you must tell me how your system will connect the
rorschach with any of those images, such as a bat - by *geometrical* means.


snip


This is called Content-based image retrieval (CBIR), also known as
query by image content (QBIC) and content-based visual information
retrieval (CBVIR) is the application of computer vision to the image
retrieval problem, that is, the problem of searching for digital
images in large databases.
http://en.wikipedia.org/wiki/CBIR

This is a hot area of computer research, with many test systems. (see 
article).


Nothing to do with AGI, of course.


BillK,

CBIR  isn't the same as drawing analogies and forming metaphors - or 
comparing rorschachs to bats. You're saying that analogy and metaphor are 
not important to AGI?  I disagree. I think they're central.


CBIR is about retrieving the *same / v. similar* kinds of objects or shapes
(rather than radically different ones that nevertheless have some
not-obvious similarity, e.g. moustaches and walruses).   And no, the ability
to analyse images in terms of more or less the same visual elements, like,
say, the colour red, or the same texture or shape, will NOT solve the
problem of analogy/metaphor or of comparing rorschachs and bats.


May I suggest, BTW, that you really *look* at the problem - put the two 
images side by side?


http://www.desordre.net/textes/bibliotheque/rorschach.jpg
http://members.tripod.com/~susano/images/bat2.gif

And CBIR isn't really working, is it? (Not, for a second, that it's without
its uses.) It can't find pictures of dogs, can it?


You could call that a problem of conceptualisation. Which, if you go into it 
enough, is deeply related to the problem of comparing rorschachs and bats - 
because 'dogs' or 'labradors'  or pretty well any species come in very 
diverse, not-so-similar shapes.


Thanks for the ref.

P.S. Re my negativity: I'm saying broadly that AI and AGI lack certain
crucial faculties - do you think they already have all the faculties they
need to succeed?






Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mark Waser
You've missed the point. What a human does in looking at a rorschach is to 
see - i.e. compare it with - a recognizable object or creature - a bat, 
for instance, or an ant, or a gargoyle.


I didn't miss the point.  The standard visual operators are doing exactly 
the same thing.


So what you must tell me is how your or any geometrical system of analysis 
is going to be able to take a rorschach and come up similarly with a 
recognizable object or creature. Bear in mind, your system will be given 
no initial clues as to what objects or creatures are suitable as potential 
comparisons. It can by all means have a large set of visual images in 
memory, as we do. But you must tell me how your system will connect the 
rorschach with any of those images, such as a bat,  - by *geometrical* 
means.


Mike, do you know what vector graphics are?  Do you understand how comparing
vector graphics can lead to exactly such an identification?  Why are you
asking this question as if it were something new or unique?
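
(To make the idea concrete, here is a minimal sketch - an illustration on
my part, not Mark's actual method - of comparing two outlines stored as
vector polygons: resample each closed outline to a fixed number of points,
normalise for position and scale, then take a point-wise distance. A real
matcher would also align for rotation and starting point.)

import math

def resample(points, n=64):
    """Resample a closed polygon (list of (x, y)) to n points evenly
    spaced along its perimeter."""
    closed = points + [points[0]]
    lengths = [math.dist(closed[i], closed[i + 1]) for i in range(len(points))]
    total = sum(lengths)
    out, seg, travelled = [], 0, 0.0
    for k in range(n):
        target = total * k / n
        while travelled + lengths[seg] < target:
            travelled += lengths[seg]
            seg += 1
        t = (target - travelled) / lengths[seg]
        (x0, y0), (x1, y1) = closed[seg], closed[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def normalise(points):
    """Translate to the centroid and scale to unit mean radius."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = sum(math.hypot(x, y) for x, y in shifted) / len(shifted)
    return [(x / scale, y / scale) for x, y in shifted]

def shape_distance(poly_a, poly_b, n=64):
    """Mean point-wise distance between two normalised, resampled outlines
    (0 = identical up to translation and scale)."""
    a = normalise(resample(poly_a, n))
    b = normalise(resample(poly_b, n))
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n

Crude as it is, this returns a small distance for similarly shaped blobs
and a large one for dissimilar ones, which is the basic ingredient that
retrieval-by-shape systems build on.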


Of course, a geometrical system can be used to *analyse* an individual 
rorschach into some set of geometric forms - but only by hand, 
individually, on a one-off basis - and imperfectly. There is no 
geometrical *formula* for analysing rorschachs, because they can take an 
unlimited and non-formulaic variety of shapes, just as there is no 
geometrical formula for analysing the diverse shapes of living creatures, 
like bats, ants and human beings.  So there is, by extension,  no 
geometrical, or indeed any other formulaic means to *compare* a rorschach 
with any object or  creature.


There are all sorts of ad hoc algorithms that can replace what you say must 
be done by hand.  The second half of your paragraph is just blatantly 
incorrect.
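
(One example of such an algorithm - my illustration, not something Mark
specified - is the Ramer-Douglas-Peucker procedure, which reduces an
irregular traced outline to a simple polygon automatically, with no
analysis by hand.)

import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(points, tolerance):
    """Ramer-Douglas-Peucker: recursively drop points that deviate from
    the endpoint-to-endpoint line by less than `tolerance`."""
    if len(points) < 3:
        return points
    # find the point farthest from the line joining the two endpoints
    dists = [_point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    index, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = simplify(points[: index + 1], tolerance)
    right = simplify(points[index:], tolerance)
    return left[:-1] + right

# e.g. simplify(traced_outline, tolerance=2.0), where traced_outline is a
# list of (x, y) points traced round an inkblot's contour.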


Nor is there any geometrical means to compare *any* irregular objects - a 
slug and, say,  a human being walking along, a purse, say,  and a vagina, 
a rock and a chair, a moustache and a walrus.


Wrong.

If you think there is, then you obviously have solved some of the most 
important, unsolved problems of AGI, such as analogy, metaphor and 
creativity.


How so?  Show me *exactly* how they correlate and how if I have solved the 
one, the other is trivial.



You've also turned geometers into designers and artists


How so?  Since when does decomposition equal good composition?  Personally, 
I am able to analyze art and say what is good and bad.  I am not, however, a 
particularly good artist.  Your argument is just plain wrong.  AGAIN.


So I await your geometric solution to this problem - (a mere statement of 
principle will do) - with great interest. Well, actually no. Your answer 
is broadly predictable - you 1) won't have any idea here  2) will have 
nothing to say to the point and  3) be, as usual, all bark and no bite - 
all insults and no ideas.


Nice ad hominem.  Asshole. 





WARNING -- LET'S KEEP THE LIST CIVIL PLEASE ... was Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
Ummm... just a little note of warning from the list owner.

Tintner wrote:
  So I await your geometric solution to this problem - (a mere statement of
 principle will do) - with great interest. Well, actually no. Your answer is
 broadly predictable - you 1) won't have any idea here  2) will have nothing
 to say to the point and  3) be, as usual, all bark and no bite - all insults
 and no ideas.

Waser wrote:
  Nice ad hominem.  Asshole.

Uh, no.

Mark, you've been a really valuable contributor to this list for a long period
of time.

But, this sort of name-calling is just not apropos on this list.
Don't do it anymore.

Thanks
Ben



RE: [agi] How general can be and should be AGI?

2008-04-26 Thread Derek Zahn
I assume you are referring to Mike Tintner.
 
As I described a while ago, I *plonk*ed him myself a long time ago; most mail
programs have the ability to do that, and it's a good idea to figure out how
to do it with your own email program.
 
He does have the ability to point at other thinkers and their papers, such as
Lakoff and Barsalou, who have extremely interesting things to say... but his
own contributions (beyond citing) to any conversation are infuriating. I think
it's about time to give up on Mike until he learns to behave again. And
you shouldn't use sarcasm -- he just doesn't get it.
