This link to today's science news shows that scientists and
mathematicians evidently share common abilities:
http://www.sciencedaily.com/releases/2008/10/081027121515.htm
From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED
/reingoldcharness_perception-in-chess_2005_underwood.pdf
-Matthias
-Original Message-
From: Charles Hixson [mailto:[EMAIL PROTECTED]
Sent: Saturday, 25 October 2008 22:25
To: agi@v2.listbox.com
Subject: Re: AW: [agi] If your AGI can't learn to play chess it is no AGI
Dr. Matthias Heger wrote:
Learning is gaining knowledge. This ability does not imply the ability to
*use* the knowledge.
You can easily learn the mathematical axioms of the numbers. These axioms
contain everything there is to know about the numbers.
But many people who had this knowledge could not prove Fermat's
No Mike. AGI must be able to discover regularities of all kinds in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.
Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans can solve problems in
this domain using abilities which
The limitations of Gödelian completeness/incompleteness are a subset of the
much stronger limitations of finite automata.
If you want to build a spaceship to go to Mars, it is of no practical
relevance to consider whether it is theoretically possible to move through
wormholes in the universe.
I
This does not imply that people usually do not use visual patterns to play
chess.
It only implies that visual patterns are not necessary.
Since I do not know any good blind chess player, I would suspect that visual
patterns are better for chess
than those patterns which are used by blind players.
: Friday, 24 October 2008 11:03
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
No Mike. AGI must be able to discover regularities of all kinds in all
domains.
If you can
I do not reply to the details of your posting because I think
a) you mystify AGI
b) you assess the ability to discover regularities completely wrongly
c) the details may be interesting but are not relevant to the subject
of this thread
Just imagine you have built an AGI
Mark Waser wrote
Must it be able to *discover* regularities or must it be able to be taught
and subsequently effectively use regularities? I would argue the latter.
(Can we get a show of hands of those who believe the former? I think that
it's a small minority but . . . )
If AGI means the
Mark Waser wrote:
Can we get a listing of what you believe these limitations are and whether
or not you believe that they apply to humans?
I believe that humans are constrained by *all* the limits of finite automata
yet are general intelligences so I'm not sure of your point.
It is also my
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I do not think that it is essential for the quality of my chess who
taught me to play chess.
I could have learned
. From there we can argue whether the problem-solving abilities
necessary for NLU are sufficient to allow problem-solving to occur in any
domain (as I have argued).
Terren
--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: AW: [agi
23, 2008 at 5:38 PM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I am sure that everyone who learns chess by playing against chess computers
and is able to learn to play good chess (which is not certain, as also
I am very impressed by the performance of humans in chess compared to
computer chess.
The computer steps through millions(!) of positions per second. And even if
the best chess players say they only evaluate at most 3 positions per
second, I am sure that this cannot be true because there are so
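To make the scale difference concrete, a chess engine's inner loop is a recursive search along the lines of the minimal negamax/alpha-beta sketch below (Python; the Board interface with evaluate(), legal_moves(), push() and pop() is a hypothetical stand-in, not any particular engine's API). Executing millions of such calls per second is routine for a machine and impossible for a human.

    def alphabeta(board, depth, alpha, beta):
        # Leaf node: static evaluation from the side to move's perspective.
        if depth == 0 or board.is_game_over():
            return board.evaluate()
        for move in board.legal_moves():
            board.push(move)
            # Negamax: the opponent's best reply, negated.
            score = -alphabeta(board, depth - 1, -beta, -alpha)
            board.pop()
            if score >= beta:
                return beta  # Opponent avoids this line anyway: prune.
            if score > alpha:
                alpha = score
        return alpha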
tractable via measurable incremental improvements (even though it is
admittedly still at a *very* early stage).
-dave
On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
It seems to me that many people think that embodiment is very important for
AGI.
For instance some
gallantly. Specialization is for insects.
-dave
On Wed, Oct 22, 2008 at 7:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I see no argument in your text against my main argument, that an AGI
should be able to learn chess from playing chess alone. That is what I call
a straw-man reply.
My
I do not claim that AGI might not have biases which are equivalent to the
genes of your example. The point is that AGI is the union of all AI sets. If
I have a certain domain d and a problem p, and I know that p can be solved
using nothing but d, then AGI must be able to solve problem p in d
If you give the system the rules of chess, then it has everything it needs
to know to become a good chess player.
It may play against itself, against a common chess program, or against
humans.
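A minimal self-play training loop might look like the sketch below (Python; Game, choose_move() and update() are illustrative placeholders assuming some learnable policy, not a description of any existing system):

    def self_play_episode(game, policy):
        # The agent plays one complete game against itself.
        history = []
        while not game.is_over():
            move = policy.choose_move(game)   # e.g. epsilon-greedy selection
            history.append((game.state(), move))
            game.apply(move)
        # Reinforce the visited (state, move) pairs toward the final
        # outcome (+1 win, -1 loss, 0 draw).
        policy.update(history, game.result())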
- Matthias
Trent Waddington [mailto:[EMAIL PROTECTED] wrote
No-one can learn chess from playing
I do not regard chess as being as important as a drosophila for AI. It would
just be a first milestone where we can make a fast proof of concept for an
AGI approach. The faster we can sort out bad AGI approaches, the sooner we
will obtain a successful one.
Chess has the advantage of being an easy
I agree that chess is far from sufficient for AGI. But I have mentioned this
already at the beginning of this thread.
The important role of chess for AGI could be to rule out bad AGI approaches
as fast as possible.
Before you go to more complex domains you should consider chess as a first
Very useful link. Thanks.
-Matthias
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 22 October 2008 15:40
To: agi@v2.listbox.com
Subject: [agi] A huge amount of math now in standard first-order predicate
logic format!
I had not noticed this before, though it was
Ben wrote:
The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in
These narrow, closed, deterministic domains are *subsets* of what AGI is
You make the implicit assumption that a natural language understanding
system will pass the Turing test. Can you prove this?
Furthermore, it is just an assumption that the ability to have and to apply
the rules is really necessary to pass the Turing test.
For these two reasons, you still
It depends on what "to play chess poorly" means. No one would expect a
general AGI architecture to outperform specialized chess programs with the
same computational resources. I think you could convince a lot of people if
you demonstrate that your approach, which is obviously completely different
: Defining AGI)
--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
For instance, I doubt that anyone can prove that
any system which understands natural language is
necessarily able to solve
the simple equation x*3 = y for a given y.
It can be solved with statistics. Take y = 12
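As a toy illustration of "solved with statistics" (Python; purely a sketch of the idea, not anyone's proposed method), one can fit x as a linear function of y from observed (x, 3x) pairs and then read off x for y = 12:

    import random

    # Observations of the regularity y = 3*x.
    xs = [random.uniform(0, 10) for _ in range(100)]
    pairs = [(x, 3 * x) for x in xs]

    # Least-squares slope (no intercept) of x regressed on y: x = a*y.
    a = sum(x * y for x, y in pairs) / sum(y * y for _, y in pairs)

    print(a * 12)  # approximately 4.0, i.e. x = y/3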
There is another point which indicates that the ability to understand
language or to learn language does not imply *general* intelligence.
You can often observe in school that linguistically talented students are
poor in mathematics and vice versa.
- Matthias
---
strongly suspect that any software system **with a vaguely
human-mind-like architecture** that is capable of learning human language,
would also be able to learn basic mathematics
ben
On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Sorry, but this was no proof that a natural
natural language understanding without understanding (which
equals scientist ;-).
Understanding does not equal scientist.
The claim that natural language understanding needs understanding is
trivial. This wasn't your initial hypothesis.
- Original Message -
From: Dr. Matthias Heger
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 21 October 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)
--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
For instance, I doubt
Mark Waser answered to
I don't say that anything is easy.
:
Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining
Here's my simple proof: algebra, or any other formal language for that
matter, is expressible in natural language, if inefficiently.
Words like quantity, sum, multiple, equals, and so on, are capable of
conveying the same meaning that the sentence x*3 = y conveys. The rules
for
It seems to me that many people think that embodiment is very important for
AGI.
For instance, some people seem to believe that you can't be a good
mathematician if you haven't had some embodied experience.
But this would have a rather strange consequence:
If you give your AGI a difficult
Any argument of the kind "you should first read xxx + yyy + ..." is
very weak. It is a pseudo killer argument against everything, with no
content at all.
If xxx, yyy, ... really contains relevant information for the discussion,
then it should be possible to quote the essential part with a few
Terren wrote
Language understanding requires a sophisticated conceptual framework
complete with causal models, because, whatever meaning means, it must be
captured somehow in an AI's internal models of the world.
Conceptual framework is not well defined. Therefore I can't agree or
disagree.
I think in the past there were always difficult technological problems
leading to conceptual controversy about how to solve them. Time has
always shown which approaches were successful and which were not.
The fact that we have so many philosophical discussions shows that we still
If MW were scientific, he would not have asked Ben to prove that MW's
hypothesis is wrong.
The person who has to prove something is the person who proposes the
hypothesis.
And MW has not given the slightest argument for his hypothesis that a
natural language understanding system can easily be a
A conceptual framework starts with knowledge representation. Thus a symbol S
refers to a persistent pattern P which is, in some way or another, a reflection
of the agent's environment and/or a composition of other symbols. Symbols are
related to each other in various ways. These relations
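A minimal data-structure sketch of such a symbol/pattern store (Python; the class and field names are my own illustrative assumptions, not Terren's design):

    from dataclasses import dataclass, field

    @dataclass
    class Pattern:
        features: list                              # reflection of the agent's environment
        parts: list = field(default_factory=list)   # composition of other symbols

    @dataclass
    class Symbol:
        name: str                                   # the symbol S
        pattern: Pattern                            # the persistent pattern P it refers to
        relations: dict = field(default_factory=dict)  # e.g. {"is_a": [...], "next_to": [...]}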
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not need much intelligence.
Every email program can receive meaning, store meaning, and express it
outwardly in order to send it to another computer. It can even do so without
What the computer does with the data it receives depends on the information
in the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.
Language understanding would be useful for teaching the AGI with existing
knowledge already
and understanding
On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not need much intelligence.
Every email program can receive meaning, store meaning
For the discussion of the subject, the details of the pattern representation
are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
The process of changing the internal model does not belong to language
understanding.
Language understanding ends when the matching process is finished. Language
understanding can be strictly separated conceptually from the creation and
manipulation of patterns, just as you can separate the process of
If some details of the internal structure of patterns are visible, then
this is no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.
Since in many communicating technical systems there are so many details
which are
by the authors of future posts on the topic of language
and AGI. If the AGI list were a forum, Matthias's post should be pinned!
-dave
On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
The process of outwardly expressing meaning may be fundamental to any social
intelligence
Terren wrote:
Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything. The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency, however, should not be confused with a
Mark Waser wrote
What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently
assuming that the matching process is crisp, unambiguous, and irreversible
(and ask Stephen Reed how well that works for TexAI).
I do not assume this. Why should
We can assume that the speaking human itself is not aware of every
detail of its patterns. At least these details would probably be hidden
from communication.
-Matthias
Mark Waser wrote
Details that don't need to be transferred are those which are either known
by or unnecessary to the
The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages cost much less.
If the language must be learned, then things
Mark Waser wrote:
*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along. Science is a rote process that can be
learned and executed by anyone -- as long as
I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than the pattern for tree does.
You don't have to manipulate any patterns and can do the translation.
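A toy sketch of that resolution step (Python; the association scores are invented purely for illustration):

    # Association strengths between patterns, learned from experience.
    assoc = {("dog", "angry"): 0.9, ("tree", "angry"): 0.1}

    def resolve(candidates, attribute):
        # Pick the candidate whose pattern relates most strongly to the attribute.
        return max(candidates, key=lambda c: assoc.get((c, attribute), 0.0))

    print(resolve(["dog", "tree"], "angry"))  # -> "dog"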
-
Absolutely. We are not aware of most of our assumptions that are based in
our common heritage, culture, and embodiment. But an external observer
could easily notice them and tease out an awful lot of information about us
by doing so.
You do not understand what I mean.
There will be a lot of
I think embodied linguistic experience could be *useful* for an AGI to do
mathematics. The reason for this is that creativity comes from the use of
huge knowledge and experience in different domains.
But on the other hand, I don't think embodied experience is necessary. It
could even have
If you don't like mirror neurons, forget them. They are not necessary for my
argument.
Trent wrote
Oh you just hit my other annoyance.
How does that work?
Mirror neurons
IT TELLS US NOTHING.
Trent
I do not agree that body mapping is necessary for general intelligence. But
this would be one of the easiest problems today.
In the area of mapping the body onto another (artificial) body, computers
are already very smart:
See the video on this page:
http://www.image-metrics.com/
-Matthias
of approach even more powerful...
-- Ben G
On Sat, Oct 18, 2008 at 3:45 AM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
I think embodied linguistic experience could be **useful** for an AGI to
do mathematics. The reason for this is that creativity comes from the use
of huge knowledge
I think here you can see that automated mapping between different faces is
possible and that the computer can smoothly morph between them. I think the
performance is much better than human imagination can achieve.
http://de.youtube.com/watch?v=nice6NYb_WA
-Matthias
Mike Tintner wrote
If you can build a system which understands human language, you are still far
away from AGI.
Being able to understand someone else's language in no way implies having
the same intelligence. I think there were many people who understood
Einstein's language but were not able to create
I think it does involve being confronted with two different faces or
objects, randomly chosen/positioned, and finding/recognizing the
similarities between them.
If you watched the video carefully, you heard them speak of automated
algorithms which do the matching.
On
After the first positioning there is no point-to-point matching at all.
The main intelligence comes from the knowledge base of hundreds of
3D-scanned faces.
This is a huge vector space. And it is no easy task to match a given picture
of a face with a vector (= face) within the vector space.
The
But even understanding an alien language would not necessarily imply
understanding how that intelligence works ;-) Furthermore, understanding
the language of an intelligent species is neither necessary nor sufficient
to have the same intelligence. In fact, language is only a protocol to
On Fri, Oct 17, 2008 at 12:32 PM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
In my opinion, language itself is no real domain for intelligence at all.
Language is just a communication protocol. You have patterns of a certain
domain in your brain, and you have to translate your internal pattern
In my opinion, the domain of software development is far too ambitious for
the first AGI.
Software development is not a closed domain. The AGI will need at least
knowledge about the domain of the problems for which it is to write a
program.
The English interface is nice, but today it is just
In theorem proving, computers are also weak compared to the performance of
good mathematicians.
The domain of mathematics is well understood. But we do not understand how
we manage to solve problems within this domain.
In my opinion, language itself is no real domain for intelligence at all.
Language
If we do not agree on how to define AGI, intelligence, creativity, etc., we
cannot discuss the question of how to build it.
And even if we all agree on these questions, there is the further question
of which domain it is useful to build the first AGI for.
AGI is the ability to solve different problems in
Text compression would be AGI-complete, but I think it is still too big.
The problem is the source of knowledge. If you restrict it to mathematical
expressions, then the amount of data necessary to teach the AGI is probably
much smaller. In fact, the AGI could teach itself using a current theorem
prover.
My intention is not to define intelligence. I choose mathematics just as a
test domain for first AGI algorithms.
The reasons:
1. The domain is well understood.
2. The domain has regularities. Therefore a highly intelligent algorithm has
a chance to outperform less intelligent algorithms
3. The
Mike Tintner wrote:
You don't seem to understand creative/emergent problems (and I find this
certainly not universal, but v. common here).
If your chess-playing AGI is to tackle a creative/emergent problem (at a
fairly minor level) re chess - it would have to be something like: find a
new
The quantum-level biases would be more general and more correct, as is the
case with quantum physics and classical physics.
The reasons why humans do not have modern-physics biases for space and time:
there is no relevant survival advantage in having such biases,
and probably the costs of
Good points. I would like to add a further point:
Human language is a sequence of words which is used to transfer patterns of
one brain into another brain.
When we have an AGI which understands and speaks language, then for the
first time there would be an exchange of patterns between an
The problem of emergent behavior already arises within a chess program
which visits millions of chess positions within a second.
I think the problem of emergent behavior equals the fine-tuning problem
which I have already mentioned:
We will know that the main architecture of our AGI
Brad Paulson wrote
More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding. Looking for new approaches to
this problem, many researchers (including prominent members of
Brad Paulson wrote
The question I'm raising in this thread is more one of priorities and
allocation of scarce resources. Engineers and scientists comprise only
about 1% of the world's population. Is human-level NLU worth the resources
it has consumed, and will continue to consume, in the
From my points 1. and 2. it should be clear that I was not talking about a
distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is that the parts of the distributed AGI
Stan wrote:
Seems hard to imagine information processing without identity.
Intelligence is about invoking methods. Methods are created because
they are expected to create a result. The result is the value - the
value that allows them to be selected from many possible choices.
Identity can
1. We do not feel ourselves to be at exactly a single point in space.
Instead, we identify ourselves with our body, which consists of several
parts which are already at different points in space. Your eye is not in
the same place as your hand.
I think this is proof that a distributed AGI will not need
Chess is a typical example of a very hard problem where human-level
intelligence could be outperformed by typical AI programs when they have
enough computing power available.
But a chess program is no AGI program because it is restricted to a very
narrow, well-defined problem and environment.
Derek Zahn wrote:
For example, using Goertzel's definition for intelligence: complex goals in
complex environments -- the goals of non-human animals do not seem complex
in the same way that building an airplane is complex...
I think we underestimate the intelligence of many non-human animals.
--
--- On Sat, 6/14/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Which animal has the smallest level of
intelligence which still would be sufficient for a robot to be an
AGI-robot?
Homo sapiens, according to Turing's definition of intelligence.
-- Matt Mahoney, [EMAIL PROTECTED
Mike Tintner [mailto:[EMAIL PROTECTED] wrote
And that's the same mistake people are making with AGI generally - no one
has a model of what general intelligence involves, or of the kind of
problems it must solve - what it actually DOES - and everyone has left that
till later, and is instead
Steve Richfield wrote
In short, most people on this
list appear to be interested only in HOW to straight-line program an AGI
(with the implicit assumption that we operate anything at all like we
appear
to operate), but not in WHAT to program, and most especially not in any
apparent
John G. Rose [mailto:[EMAIL PROTECTED] wrote
For general intelligence some components and sub-components of consciousness
need to be there and some don't. And some could be replaced with a human
operator as in an augmentation-like system. Also some components could be
designed drastically
From: Russell Wallace [mailto:[EMAIL PROTECTED]
On Sun, May 4, 2008 at 1:55 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
If we imagine a brain scanner with perfect resolution in space and time,
then we get all the information in the brain, including the phenomenon of
qualia. But we
Richard Loosemore [mailto:[EMAIL PROTECTED] wrote
That was a personal insult.
You should be ashamed of yourself, if you cannot discuss the issues
without filling your comments with ad hominem abuse.
I did think about replying to the specific insults you set out above,
but in the end I have
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote
Dr. Matthias Heger [EMAIL PROTECTED] wrote:
The interesting question is how we learn the basic nouns like ball
or cat, i.e. abstract concepts for objects of our environment.
How do we create the basic patterns?
A child sees a ball, hears
- Matt Mahoney [mailto:[EMAIL PROTECTED]
No. Qualia are not needed for learning because there is no physical
difference between an agent with qualia and one without. Chalmers
questioned their existence; see http://consc.net/papers/qualia.html
It is disturbing to think that qualia do not
Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote
I don't currently see anything mysterious in qualia: it is one of
those cases where the debate about a phenomenon is much more complicated
than the phenomenon itself. Just as 'free will' is just the way a
self-watching control system operates, considering
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote
I agree that just having precise data isn't enough: in other words,
you won't automatically be able to generalize from such data to
different initial conditions and answer the queries about the phenomenon
in those cases. It is a basic statement
From: Mike Tintner [mailto:[EMAIL PROTECTED] wrote
Well, clearly you do need emotions, continually evaluating the
worthwhileness of your current activity and its goals/risks and costs - as
set against the other goals of your psychoeconomy.
And while your and my emotions may have
Mike Tintner [mailto:[EMAIL PROTECTED] wrote
You only need emotions when you're dealing with problems that are
problematic, ill-structured, and involving potentially infinite reasoning.
(Chess qualifies as that for a human being, not for a program).
When dealing with such problems, you
Matt Mahoney [mailto:[EMAIL PROTECTED]
Repeat the trial many times. Out of the thousands of perceptual
features present when the child hears "ball", the relevant features
will reinforce and the others will cancel out.
The concept of ball that a child learns is far too complex to
manually code
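A toy sketch of that reinforcement process (Python; entirely illustrative, not Matt's actual model): features that co-occur with the word accumulate weight, while uncorrelated ones average out over many trials.

    import random

    weights = {}  # association strength between each feature and the word "ball"

    def trial(features_present):
        # Reinforce features seen when the word is heard; decay absent ones.
        for f in set(list(weights) + features_present):
            delta = 1.0 if f in features_present else -1.0
            weights[f] = weights.get(f, 0.0) + delta

    # "round" and "bounces" always co-occur with "ball"; the rest is noise.
    for _ in range(1000):
        noise = random.sample(["red", "loud", "big", "soft"], 2)
        trial(["round", "bounces"] + noise)

    # After many trials, "round" and "bounces" carry large weights while
    # the noise features hover near zero.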
Matt Mahoney [mailto:[EMAIL PROTECTED]
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:
You will agree that you have unconscious perception without qualia
and conscious perception with qualia. Since you are a physical system,
there must be a physically based explanation for the difference
- Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote:
So you explain qualia by a certain destination of perception in the brain?
I do not think that this can be all. But it will be as I have said: some
day we will be able to describe the whole physiological process of qualia,
but we will never be
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote
If you can use a brain scanning device that says you experience X
when you experience X, why is it significantly different from
observing a stone falling to earth with a device that observes the stone
falling to earth?
Because only you can know
From: Matt Mahoney [mailto:[EMAIL PROTECTED] wrote
This is a good example where a neural language model can solve the
problem. The approximate model is
phonemes - words - semantics - grammar
where the phoneme set activates both the "apples" and "applies" neurons
at the word level. This is
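A toy sketch of that word-level activation (Python; the phoneme codes and scoring are invented purely for illustration): an ambiguous phoneme sequence keeps every near-matching word active, and the semantic layer disambiguates later.

    # Candidate words with rough phoneme codes (illustrative only).
    lexicon = {"apples": ["ae", "p", "l", "z"], "applies": ["ax", "p", "l", "ay", "z"]}

    def activations(phonemes):
        # Score each word by phoneme overlap; near-matches stay active together.
        return {w: len(set(p) & set(phonemes)) / len(p) for w, p in lexicon.items()}

    print(activations(["ae", "p", "l", "ay", "z"]))
    # Both "apples" and "applies" receive high activation; semantics picks one.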
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote
Object-oriented programming is good for organizing software, but I don't
think it is good for organizing human knowledge. It is a very rough
approximation. We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote
eat(Food f)
eat(Food f, List<SideDish> l)
eat(Food f, List<Tool> l)
eat(Food f, List<People> l)
...
This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar. We do
not
:[EMAIL PROTECTED]
Sent: Saturday, 3 May 2008 01:27
To: agi@v2.listbox.com
Subject: [agi] Re: AW: Language learning
--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:
So the middle layers of AGI will be the most difficult layers.
I think if you try to integrate a structured or O-O knowledge base
Charles D Hixson [mailto:[EMAIL PROTECTED]
The two AGI modes that I believe people use are 1) mathematics and 2)
experiment. Note that both operate in restricted domains, but within
those domains they *are* general. (E.g., mathematics cannot generate
its own axioms, postulates, and
Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26 April 2008 19:54
Yes, truly general AI is only possible in the case of infinite
processing power, which is likely not physically realizable.
How much generality can be achieved with how much
processing power is not yet known -- math