Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Dr. Matthias Heger
This link from today's science news shows that scientists and mathematicians obviously have abilities in common: http://www.sciencedaily.com/releases/2008/10/081027121515.htm From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED

Re: Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Dr. Matthias Heger
/reingoldcharness_perception-in-chess_2005_underwood.pdf -Matthias -Original Message- From: Charles Hixson [mailto:[EMAIL PROTECTED] Sent: Saturday, 25 October 2008 22:25 To: agi@v2.listbox.com Subject: Re: Re: [agi] If your AGI can't learn to play chess it is no AGI Dr. Matthias Heger wrote

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Dr. Matthias Heger
Learning is gaining knowledge. This ability does not imply the ability to *use* the knowledge. You can easily learn the mathematical axioms of numbers. These axioms contain everything there is to know about the numbers. But a lot of people who had this knowledge could not prove Fermat's
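For concreteness, the axioms of the numbers presumably meant here are the Peano axioms (a gloss; the post itself does not name them):

```latex
\begin{align*}
&\text{(P1)}\quad 0 \in \mathbb{N} \\
&\text{(P2)}\quad \forall n \in \mathbb{N}:\ S(n) \in \mathbb{N} \\
&\text{(P3)}\quad \forall n \in \mathbb{N}:\ S(n) \neq 0 \\
&\text{(P4)}\quad \forall m,n \in \mathbb{N}:\ S(m) = S(n) \Rightarrow m = n \\
&\text{(P5)}\quad \bigl(P(0) \wedge \forall n\,(P(n) \Rightarrow P(S(n)))\bigr) \Rightarrow \forall n\,P(n)
\end{align*}
```

Knowing these five lines is cheap; deriving a deep theorem from them is the part that needs intelligence, which is exactly the distinction the post draws.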

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
No Mike. AGI must be able to discover regularities of all kinds in all domains. If you can find a single domain where your AGI fails, it is no AGI. Chess is broad and narrow at the same time. It is easily programmable and testable, and humans can solve problems of this domain using abilities which

Re: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
The limitations of Gödelian completeness/incompleteness are a subset of the much stronger limitations of finite automata. If you want to build a spaceship to go to Mars, it is of no practical relevance whether it is theoretically possible to move through wormholes in the universe. I
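The containment being claimed can be put in standard computability terms (a rendering not taken from the post): a physically realizable machine has bounded memory and is therefore a finite automaton, and the tasks beyond a finite automaton strictly contain the tasks beyond a Turing machine:

```latex
\mathrm{REG} \;\subsetneq\; \mathrm{DECIDABLE} \;\subsetneq\; \mathrm{RE}
\quad\Longrightarrow\quad
\{\,p : p \notin \mathrm{REG}\,\} \;\supsetneq\; \{\,p : p \notin \mathrm{DECIDABLE}\,\}
```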

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
This does not imply that people usually do not use visual patterns to solve chess. It only implies that visual patterns are not necessary. Since I do not know any good blind chess player, I would suspect that visual patterns are better for chess than those patterns which are used by blind people.

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
: Friday, 24 October 2008 11:03 To: agi@v2.listbox.com Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: No Mike. AGI must be able to discover regularities of all kinds in all domains. If you can

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
I do not reply to the details of your posting because I think a) You mystify AGI b) You evaluate the ability to discover regularities completely wrongly c) The details may be interesting but are not relevant for the subject of this thread Just imagine you have built an AGI

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
Mark Waser wrote Must it be able to *discover* regularities or must it be able to be taught and subsequently effectively use regularities? I would argue the latter. (Can we get a show of hands of those who believe the former? I think that it's a small minority but . . . ) If AGI means the

Re: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
Mark Waser wrote: Can we get a listing of what you believe these limitations are and whether or not you believe that they apply to humans? I believe that humans are constrained by *all* the limits of finite automata yet are general intelligences so I'm not sure of your point. It is also my

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
To: agi@v2.listbox.com Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: I do not think that it is essential for the quality of my chess who taught me to play chess. I could have learned

Re: Re: [agi] Understanding and Problem Solving

2008-10-23 Thread Dr. Matthias Heger
. From there we can argue whether the problem-solving abilities necessary for NLU are sufficient to allow problem-solving to occur in any domain (as I have argued). Terren --- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: From: Dr. Matthias Heger [EMAIL PROTECTED] Subject: Re: [agi

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
23, 2008 at 5:38 PM, Trent Waddington [EMAIL PROTECTED] wrote: On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: I am sure that everyone who learns chess by playing against chess computers and is able to learn good chess playing (which is not certain, as also

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
I am very impressed by the performance of humans in chess compared to computer chess. The computer steps through millions(!) of positions per second. And even if the best chess players say they only evaluate at most 3 positions per second, I am sure that this cannot be true because there are so
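As a sense of scale for the numbers quoted here, a minimal sketch (not from the thread; assumes the third-party python-chess package): even a naive Python tree walk touches hundreds of thousands of positions, and optimized C engines reach millions per second.

```python
# Count every position reachable in a fixed number of plies ("perft"),
# then report the raw positions-per-second of this naive walk.
import time
import chess  # third-party package: python-chess

def perft(board: chess.Board, depth: int) -> int:
    if depth == 0:
        return 1
    nodes = 0
    for move in board.legal_moves:
        board.push(move)
        nodes += perft(board, depth - 1)
        board.pop()
    return nodes

board = chess.Board()
start = time.time()
nodes = perft(board, 4)  # 197,281 positions from the starting position
print(f"{nodes} nodes, {nodes / (time.time() - start):,.0f} nodes/s")
```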

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
tractable via measurable incremental improvements (even though it is admittedly still at a *very* early stage). -dave On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: It seems to me that many people think that embodiment is very important for AGI. For instance some

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
gallantly. Specialization is for insects. -dave On Wed, Oct 22, 2008 at 7:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: I see no argument in your text against my main argument, that an AGI should be able to learn chess from playing chess alone. This is what I call straw man replies. My

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not claim that AGI might not have a bias which is equivalent to the genes of your example. The point is that AGI is the union set of all AI sets. If I have a certain domain d and a problem p and I know that p can be solved using nothing but d, then AGI must be able to solve problem p in d
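Rendered formally (notation not taken from the post), the claim is:

```latex
\mathrm{AGI} \;=\; \bigcup_{d \in \mathcal{D}} \mathrm{AI}_d ,
\qquad
\forall d\ \forall p :\;\; \mathrm{solvable}(p, d) \;\Rightarrow\; \text{AGI solves } p \text{ within } d .
```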

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
If you give the system the rules of chess, then it has everything it needs to know to become a good chess player. It may play against itself or against a common chess program or against humans. - Matthias Trent Waddington [mailto:[EMAIL PROTECTED] wrote No-one can learn chess from playing
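The "play against itself" setting is easy to set up; a hypothetical skeleton (again assuming python-chess), where a real learner would replace random_move with a policy updated from game outcomes:

```python
import random
import chess  # third-party package: python-chess

def random_move(board: chess.Board) -> chess.Move:
    # Stand-in for a learned policy.
    return random.choice(list(board.legal_moves))

def self_play_game() -> str:
    board = chess.Board()
    while not board.is_game_over():
        board.push(random_move(board))
    return board.result()  # "1-0", "0-1" or "1/2-1/2"

print([self_play_game() for _ in range(5)])
```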

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not regard chess as being as important as a drosophila for AI. It would just be a first milestone where we can make a fast proof of concept for an AGI approach. The faster we can sort out bad AGI approaches, the sooner we will obtain a successful one. Chess has the advantage of being an easy

Re: Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I agree that chess is far from sufficient for AGI. But I have mentioned this already at the beginning of this thread. The important role of chess for AGI could be to rule out bad AGI approaches as fast as possible. Before you go to more complex domains you should consider chess as a first

Re: [agi] A huge amount of math now in standard first-order predicate logic format!

2008-10-22 Thread Dr. Matthias Heger
Very useful link. Thanks. -Matthias From: Ben Goertzel [mailto:[EMAIL PROTECTED] Sent: Wednesday, 22 October 2008 15:40 To: agi@v2.listbox.com Subject: [agi] A huge amount of math now in standard first-order predicate logic format! I had not noticed this before, though it was

Re: Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
Ben wrote: The ability to cope with narrow, closed, deterministic environments in an isolated way is VERY DIFFERENT from the ability to cope with a more open-ended, indeterminate environment like the one humans live in These narrow, closed, deterministic domains are *subsets* of what AGI is

Re: Re: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Dr. Matthias Heger
You make the implicit assumption that a natural language understanding system will pass the Turing test. Can you prove this? Furthermore, it is just an assumption that the ability to have and to apply the rules is really necessary to pass the Turing test. For these two reasons, you still

Re: Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
It depends on what playing chess poorly means. No one would expect that a general AGI architecture can outperform special chess programs with the same computational resources. I think you could convince a lot of people if you demonstrate that your approach, which is obviously completely different

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
: Defining AGI) --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: For instance, I doubt that anyone can prove that any system which understands natural language is necessarily able to solve the simple equation x*3 = y for a given y. It can be solved with statistics. Take y = 12
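A toy version of the "solved with statistics" idea (a construction for illustration, not Mahoney's code): tabulate products seen in training text, then answer the query by lookup rather than by algebra.

```python
from collections import defaultdict

# Training "corpus": strings of the kind a text predictor would have seen.
corpus = [f"{a} * 3 = {a * 3}" for a in range(100)]

seen = defaultdict(list)
for line in corpus:
    x, _, _, _, y = line.split()
    seen[int(y)].append(int(x))

print(seen[12])  # [4] -- the x values that co-occurred with "* 3 = 12"
```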

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
There is another point which indicates that the ability to understand language or to learn language does not imply *general* intelligence. You can often observe in school that students with linguistic talent are poor in mathematics and vice versa. - Matthias ---

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
strongly suspect that any software system **with a vaguely human-mind-like architecture** that is capable of learning human language would also be able to learn basic mathematics ben On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: Sorry, but this was no proof that a natural

Re: Re: Re: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
natural language understanding without understanding (which equals scientist ;-). Understanding does not equal scientist. The claim that natural language understanding needs understanding is trivial. This wasn't your initial hypothesis. - Original Message - From: Dr. Matthias Heger

Re: Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
-Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED] Sent: Tuesday, 21 October 2008 05:05 To: agi@v2.listbox.com Subject: [agi] Language learning (was Re: Defining AGI) --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: For instance, I doubt

Re: Re: Re: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser answered to "I don't say that anything is easy.": Direct quote cut and paste from *your* e-mail . . . . -- From: Dr. Matthias Heger To: agi@v2.listbox.com Sent: Sunday, October 19, 2008 2:19 PM Subject: Re: Re: [agi] Re: Defining

Re: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Here's my simple proof: algebra, or any other formal language for that matter, is expressible in natural language, if inefficiently. Words like quantity, sum, multiple, equals, and so on, are capable of conveying the same meaning that the sentence x*3 = y conveys. The rules for
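A minimal sketch of the translation being claimed (the phrase table and word list are an illustration, assuming the sympy package): "three times a quantity equals twelve" carries the same content as x*3 = y with y = 12.

```python
import sympy

NUMBERS = {"three": 3, "twelve": 12}

def parse(sentence: str) -> sympy.Eq:
    words = sentence.lower().split()
    lhs = NUMBERS[words[0]] * sympy.Symbol("x")  # "three times a quantity"
    rhs = NUMBERS[words[-1]]                     # "twelve"
    return sympy.Eq(lhs, rhs)

eq = parse("three times a quantity equals twelve")
print(sympy.solve(eq))  # [4]
```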

[agi] If your AGI can't learn to play chess it is no AGI

2008-10-21 Thread Dr. Matthias Heger
It seems to me that many people think that embodiment is very important for AGI. For instance some people seem to believe that you can't be a good mathematician if you haven't had some embodied experience. But this would have a rather strange consequence: If you give your AGI a difficult

Re: Re: Re: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Any argument of the kind "you should better first read xxx + yyy + ..." is very weak. It is a pseudo killer argument against everything, with no content at all. If xxx, yyy, ... contain really relevant information for the discussion then it should be possible to quote the essential part with few

Re: Re: Re: Re: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Terren wrote Language understanding requires a sophisticated conceptual framework complete with causal models, because, whatever meaning means, it must be captured somehow in an AI's internal models of the world. Conceptual framework is not well defined. Therefore I can't agree or disagree.

Re: [agi] Re: Value of philosophy

2008-10-20 Thread Dr. Matthias Heger
I think in the past there were always difficult technological problems leading to a conceptual controversy over how to solve these problems. Time has always shown which approaches were successful and which were not. The fact that we have so many philosophical discussions shows that we still

Re: Re: Re: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
If MW were scientific then he would not have asked Ben to prove that MW's hypothesis is wrong. The person who has to prove something is the person who creates the hypothesis. And MW has not given even a tiny argument for his hypothesis that a natural language understanding system can easily be a

Re: Re: Re: Re: Re: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
A conceptual framework starts with knowledge representation. Thus a symbol S refers to a persistent pattern P which is, in some way or another, a reflection of the agent's environment and/or a composition of other symbols. Symbols are related to each other in various ways. These relations
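A bare-bones rendering (an illustration, not from the post) of the structure described: symbols point at patterns, and relations link symbols to each other.

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    name: str
    pattern: str                      # stand-in for a persistent pattern P
    relations: dict = field(default_factory=dict)

dog = Symbol("dog", pattern="<sensor statistics for dogs>")
animal = Symbol("animal", pattern="<sensor statistics for animals>")
dog.relations["is_a"] = animal        # one of the "various ways" symbols relate
print(dog.relations["is_a"].name)     # animal
```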

Re: Re: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of outwardly expressing meaning may be fundamental to any social intelligence but the process itself does not need much intelligence. Every email program can receive meaning, store meaning and it can express it outwardly in order to send it to another computer. It can even do it without

Re: Re: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
What the computer does with the data it receives depends on the information in the transferred data, its internal algorithms and its internal data. This is the same with humans and natural language. Language understanding would be useful to teach the AGI with existing knowledge already

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
and understanding On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: The process of outwardly expressing meaning may be fundamental to any social intelligence but the process itself does not need much intelligence. Every email program can receive meaning, store meaning

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
For the discussion of the subject, the details of the pattern representation are not important at all. It is sufficient if you agree that a spoken sentence represents a certain set of patterns which are translated into the sentence. The receiving agent retranslates the sentence and matches the
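A bare-bones rendering (an illustration only) of the model in these posts: a sentence encodes a set of patterns; the receiver decodes it back and matches the result against its own stored patterns, which is where, on this account, language understanding ends.

```python
# Pattern names and the word table are hypothetical stand-ins.
ENCODE = {"P_dog": "dog", "P_angry": "angry", "P_tree": "tree"}
DECODE = {word: pattern for pattern, word in ENCODE.items()}

def translate(patterns: set) -> str:
    # Speaker: internal patterns -> sentence.
    return " ".join(ENCODE[p] for p in sorted(patterns))

def retranslate_and_match(sentence: str, stored: set) -> set:
    # Receiver: sentence -> patterns, then match against its own store.
    decoded = {DECODE[w] for w in sentence.split()}
    return decoded & stored

sentence = translate({"P_dog", "P_angry"})
print(retranslate_and_match(sentence, {"P_dog", "P_tree", "P_angry"}))
```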

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
The process of changing the internal model does not belong to language understanding. Language understanding ends if the matching process is finished. Language understanding can be strictly separated conceptually from creation and manipulation of patterns as you can separate the process of

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
If some details of the internal structure of patterns are visible, then this is no proof at all that there are not also details of the structure which are completely hidden from the linguistic point of view. Since in many communicating technical systems there are so many details which are

Re: Re: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
by the authors of future posts on the topic of language and AGI. If the AGI list were a forum, Matthias's post should be pinned! -dave On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: The process of outwardly expressing meaning may be fundamental to any social intelligence

Re: Re: Re: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Terren wrote: Isn't the *learning* of language the entire point? If you don't have an answer for how an AI learns language, you haven't solved anything. The understanding of language only seems simple from the point of view of a fluent speaker. Fluency, however, should not be confused with a

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote What if the matching process is not finished? This is overly simplistic for several reasons since you're apparently assuming that the matching process is crisp, unambiguous, and irreversible (and ask Stephen Reed how well that works for TexAI). I do not assume this. Why should

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
We can assume that the speaking human itself is not aware of every detail of its patterns. At least these details would probably be hidden from communication. -Matthias Mark Waser wrote Details that don't need to be transferred are those which are either known by or unnecessary to the

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
The language model does not need interaction with the environment when the language model is already complete, which is possible for formal languages but nearly impossible for natural language. That is the reason why formal languages cost much less. If the language must be learned then things

Re: Re: Re: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote: *Any* human who can understand language beyond a certain point (say, that of a slightly sub-average human IQ) can easily be taught to be a good scientist if they are willing to play along. Science is a rote process that can be learned and executed by anyone -- as long as

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
I have given the example with the dog next to a tree. There is an ambiguity. It can be resolved because the pattern for dog has a stronger relation to the pattern for angry than is the case for the pattern for tree. You don't have to manipulate any patterns and can do the translation. -

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
Absolutely. We are not aware of most of our assumptions that are based in our common heritage, culture, and embodiment. But an external observer could easily notice them and tease out an awful lot of information about us by doing so. You do not understand what I mean. There will be a lot of

Re: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
I think embodied linguistic experience could be *useful* for an AGI to do mathematics. The reason for this is that creativity comes from the use of huge knowledge and experience in different domains. But on the other hand I don't think embodied experience is necessary. It could even have

Re: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you don't like mirror neurons, forget them. They are not necessary for my argument. Trent wrote Oh you just hit my other annoyance. How does that work? Mirror neurons IT TELLS US NOTHING. Trent

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I do not agree that body mapping is necessary for general intelligence. But this would be one of the easiest problems today. In the area of mapping the body onto another (artificial) body, computers are already very smart: See the video on this page: http://www.image-metrics.com/ -Matthias

Re: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
of approach even more powerful... -- Ben G On Sat, Oct 18, 2008 at 3:45 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: I think embodied linguistic experience could be **useful** for an AGI to do mathematics. The reason for this is that creativity comes from usage of huge knowledge

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think here you can see that automated mapping between different faces is possible and the computer can smoothly morph between them. I think the performance is much better than the imagination of humans can be. http://de.youtube.com/watch?v=nice6NYb_WA -Matthias Mike Tintner wrote

Re: Re: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you can build a system which understands human language you are still far away from AGI. Being able to understand the language of someone else in no way implies having the same intelligence. I think there were many people who understood the language of Einstein but they were not able to create

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think it does involve being confronted with two different faces or objects randomly chosen/positioned and finding/recognizing the similarities between them. If you have watched the video carefully then you have heard that they spoke of automated algorithms which do the matching. On

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
After the first positioning there is no point-to-point matching at all. The main intelligence comes from the knowledge base of hundreds of 3D-scanned faces. This is a huge vector space. And it is no easy task to match a given picture of a face with a vector (= face) within the vector space. The
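A sketch of that matching step (an illustration with made-up dimensions): treat each scanned face as a vector and find the nearest stored vector to a query.

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(300, 512))              # 300 stored faces, 512-dim vectors
query = db[42] + 0.05 * rng.normal(size=512)  # a noisy view of face 42

distances = np.linalg.norm(db - query, axis=1)
print(int(np.argmin(distances)))              # 42 -- the closest stored face
```

Real systems project into a learned basis (e.g. PCA over the scans) before the nearest-neighbour step, which is what makes the matching non-trivial.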

Re: [agi] NEWS: Scientist develops programme to understand alien languages

2008-10-17 Thread Dr. Matthias Heger
But even understanding an alien language would not necessarily imply understanding how this intelligence works ;-) Furthermore, understanding the language of an intelligent species is not necessary and is also not sufficient to have the same intelligence. In fact language is only a protocol to

Re: [agi] Re: Defining AGI

2008-10-17 Thread Dr. Matthias Heger
On Fri, Oct 17, 2008 at 12:32 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: In my opinion language itself is no real domain for intelligence at all. Language is just a communication protocol. You have patterns of a certain domain in your brain; you have to translate your internal pattern

Re: Re: Defining AGI (was Re: Re: [agi] META: A possible re-focusing of this list)

2008-10-16 Thread Dr. Matthias Heger
In my opinion, the domain of software development is far too ambitious for the first AGI. Software development is not a closed domain. The AGI will need at least knowledge about the domain of the problems for which the AGI shall write a program. The English interface is nice but today it is just

Re: [agi] Re: Defining AGI

2008-10-16 Thread Dr. Matthias Heger
In theorem proving, computers are weak too compared to the performance of good mathematicians. The domain of mathematics is well understood. But we do not understand how we manage to solve problems within this domain. In my opinion language itself is no real domain for intelligence at all. Language

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Dr. Matthias Heger
If we do not agree on how to define AGI, intelligence, creativity etc. we cannot discuss the question of how to build it. And even if we all agree on these questions, there is the other question: for which domain is it useful to build the first AGI? AGI is the ability to solve different problems in

Re: Defining AGI (was Re: Re: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Dr. Matthias Heger
Text compression would be AGI-complete but I think it is still too big. The problem is the source of knowledge. If you restrict it to mathematical expressions then the amount of data necessary to teach the AGI is probably much smaller. In fact AGI could teach itself using a current theorem prover.
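A crude sketch of compression as a yardstick (an illustration; zlib stands in for a learned model, and a better predictor would use fewer bits):

```python
import zlib

text = b"the cat sat on the mat " * 100   # stand-in corpus
ratio = len(zlib.compress(text, 9)) / len(text)
print(f"compressed to {ratio:.1%} of original size")
```

The compression-as-intelligence-test idea the post weighs is exactly this: the model that predicts the corpus best compresses it smallest.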

Re: Re: Defining AGI (was Re: Re: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Dr. Matthias Heger
My intention is not to define intelligence. I choose mathematics just as a test domain for first AGI algorithms. The reasons: 1. The domain is well understood. 2. The domain has regularities. Therefore a highly intelligent algorithm has a chance to outperform less intelligent algorithms 3. The

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-07 Thread Dr. Matthias Heger
Mike Tintner wrote, You don't seem to understand creative/emergent problems (and I find this certainly not universal, but v. common here). If your chess-playing AGI is to tackle a creative/emergent problem (at a fairly minor level) re chess - it would have to be something like: find a new

Re: Re: [agi] I Can't Be In Two Places At Once.

2008-10-07 Thread Dr. Matthias Heger
The quantum-level biases would be more general and more correct, as is the case with quantum physics versus classical physics. The reasons why humans do not have modern-physics biases for space and time: There is no relevant survival advantage to having such biases, and probably the costs of

Re: Re: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Dr. Matthias Heger
Good points. I would like to add a further point: Human language is a sequence of words which is used to transfer patterns of one brain into another brain. When we have an AGI which understands and speaks language, then for the first time there would be an exchange of patterns between an

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Dr. Matthias Heger
The problem of emergent behavior already arises within a chess program which visits millions of chess positions within a second. I think the problem of emergent behavior equals the fine-tuning problem which I have already mentioned: We will know that the main architecture of our AGI

Re: Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulson wrote More generally, as long as AGI designers and developers insist on simulating human intelligence, they will have to deal with the AI-complete problem of natural language understanding. Looking for new approaches to this problem, many researchers (including prominent members of

[agi] It is more important how AGI works than what it can do.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulson wrote The question I'm raising in this thread is more one of priorities and allocation of scarce resources. Engineers and scientists comprise only about 1% of the world's population. Is human-level NLU worth the resources it has consumed, and will continue to consume, in the

Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
From my points 1. and 2. it should be clear that I was not talking about a distributed AGI which is in NO place. The AGI you mean consists of several parts which are in different places. But this is already the case with the human body. The only difference is that the parts of the distributed AGI

Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
Stan wrote: Seems hard to imagine information processing without identity. Intelligence is about invoking methods. Methods are created because they are expected to create a result. The result is the value - the value that allows them to be selected from many possible choices. Identity can

Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Dr. Matthias Heger
1. We do not feel ourselves to be at exactly a single point in space. Instead, we identify ourselves with our body, which consists of several parts which are already at different points in space. Your eye is not in the same place as your hand. I think this is a proof that a distributed AGI will not need

[agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
Chess is a typical example of a very hard problem where human-level intelligence could be outperformed by typical AI programs when they have enough computing power available. But a chess program is no AGI program because it is restricted to a very narrow, well-defined problem and environment.

Re: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
Derek Zahn wrote: For example, using Goertzel's definition for intelligence: complex goals in complex environments -- the goals of non-human animals do not seem complex in the same way that building an airplane is complex... I think we underestimate the intelligence of many non-human animals

Re: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
. -- --- On Sat, 6/14/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: Which animal has the lowest level of intelligence that would still be sufficient for a robot to be an AGI robot? Homo sapiens, according to Turing's definition of intelligence. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger
Mike Tintner [mailto:[EMAIL PROTECTED] wrote And that's the same mistake people are making with AGI generally - no one has a model of what general intelligence involves, or of the kind of problems it must solve - what it actually DOES - and everyone has left that till later, and is instead

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Dr. Matthias Heger
Steve Richfield wrote In short, most people on this list appear to be interested only in HOW to straight-line program an AGI (with the implicit assumption that we operate anything at all like we appear to operate), but not in WHAT to program, and most especially not in any apparent

Re: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger
John G. Rose [mailto:[EMAIL PROTECTED] wrote For general intelligence some components and sub-components of consciousness need to be there and some don't. And some could be replaced with a human operator as in an augmentation-like system. Also some components could be designed drastically

Re: [agi] Language learning, basic patterns, qualia

2008-05-05 Thread Dr. Matthias Heger
From: Russell Wallace [mailto:[EMAIL PROTECTED] On Sun, May 4, 2008 at 1:55 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: If we imagine a brain scanner with perfect resolution in space and time, then we get all the information of the brain, including the phenomenon of qualia. But we

Re: [agi] AGI-08 videos

2008-05-05 Thread Dr. Matthias Heger
Richard Loosemore [mailto:[EMAIL PROTECTED] wrote That was a personal insult. You should be ashamed of yourself, if you cannot discuss the issues without filling your comments with ad hominem abuse. I did think about replying to the specific insults you set out above, but in the end I have

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote Dr. Matthias Heger [EMAIL PROTECTED] wrote: The interesting question is how we learn the basic nouns like ball or cat, i.e. abstract concepts for objects of our environment. How do we create the basic patterns? A child sees a ball, hears

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
- Matt Mahoney [mailto:[EMAIL PROTECTED] No. Qualia is not needed for learning because there is no physical difference between an agent with qualia and one without. Chalmers questioned its existence, see http://consc.net/papers/qualia.html It is disturbing to think that qualia does not

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote I don't currently see something mysterious in qualia: it is one of those cases where a debate about a phenomenon is much more complicated than the phenomenon itself. Just as 'free will' is just the way a self-watching control system operates, considering

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote I agree that just having precise data isn't enough: in other words, you won't automatically be able to generalize from such data to different initial conditions and answer the queries about the phenomenon in those cases. It is a basic statement

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
From: Mike Tintner [mailto:[EMAIL PROTECTED] wrote Well, clearly you do need emotions, continually evaluating the worthwhileness of your current activity and its goals/risks and costs - as set against the other goals of your psychoeconomy. And while your and my emotions may have

[agi] goals and emotion of AGI

2008-05-04 Thread Dr. Matthias Heger
Mike Tintner [mailto:[EMAIL PROTECTED] wrote You only need emotions when you're dealing with problems that are problematic, ill-structured, and involving potentially infinite reasoning. (Chess qualifies as that for a human being, not for a program). When dealing with such problems, you

Re: Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] Repeat the trial many times. Out of the thousands of perceptual features present when the child hears ball, the relevant features will reinforce and the others will cancel out. The concept of ball that a child learns is far too complex to manually code
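A toy version (a construction for illustration) of the reinforcement described: features that reliably co-occur with hearing "ball" accumulate weight, while incidental features wash out over many trials.

```python
import random
from collections import Counter

weights = Counter()
for trial in range(1000):
    features = {"round", "bounces"}                  # always present with "ball"
    features.add(random.choice(["dog", "tree", "cup", "sky"]))  # incidental noise
    for f in features:
        weights[f] += 1

print(weights.most_common(3))  # "round" and "bounces" dominate the noise
```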

Re: Qualia (was Re: Re: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] --- Dr. Matthias Heger [EMAIL PROTECTED] wrote: You will agree that you have unconscious perception without qualia and conscious perception with qualia. Since you are a physical system there must be a physically based explanation for the difference

Re: Qualia (was Re: Re: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger
- Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote: So you explain qualia by a certain destination of perception in the brain? I do not think that this can be all. But it will be as I have said: Some day we will be able to describe the whole physiological process of qualia but we will never be

Re: Qualia (was Re: Re: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote If you can use a brain scanning device that says you experience X when you experience X, why is it significantly different from observing a stone falling to earth with a device that observes a stone falling to earth? Because only you can know

[agi] Language learning, basic patterns, qualia

2008-05-03 Thread Dr. Matthias Heger
From: Matt Mahoney [mailto:[EMAIL PROTECTED] wrote This is a good example where a neural language model can solve the problem. The approximate model is phonemes - words - semantics - grammar, where the phoneme set activates both the "apples" and "applies" neurons at the word level. This is
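A toy rendering (with made-up numbers) of that disambiguation step: the same phonemes activate both word candidates, and the semantic layer decides between them.

```python
phoneme_activation = {"apples": 0.5, "applies": 0.5}  # both fit the phonemes
context_support = {"apples": 0.9, "applies": 0.2}     # from the semantic layer

scores = {w: a * context_support[w] for w, a in phoneme_activation.items()}
print(max(scores, key=scores.get))  # apples
```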

Re: Re: Re: [agi] How general can be and should be AGI?

2008-05-02 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote Object oriented programming is good for organizing software but I don't think for organizing human knowledge. It is a very rough approximation. We have used O-O for designing ontologies and expert systems (IS-A links, etc), but this approach does

Re: Language learning (was Re: Re: Re: Re: Re: [agi] How general can be and should be AGI?)

2008-05-02 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] wrote eat(Food f) eat(Food f, List<SideDish> l) eat(Food f, List<Tool> l) eat(Food f, List<People> l) ... This type of knowledge representation has been tried and it leads to a morass of rules and no intuition on how children learn grammar. We do not

Re: [agi] Re: Re: Language learning

2008-05-02 Thread Dr. Matthias Heger
:[EMAIL PROTECTED] Sent: Saturday, 3 May 2008 01:27 To: agi@v2.listbox.com Subject: [agi] Re: Re: Language learning --- Dr. Matthias Heger [EMAIL PROTECTED] wrote: So the middle layers of AGI will be the most difficult layers. I think if you try to integrate a structured or O-O knowledge base

Re: Re: [agi] How general can be and should be AGI?

2008-05-01 Thread Dr. Matthias Heger
Charles D Hixson [mailto:[EMAIL PROTECTED] The two AGI modes that I believe people use are 1) mathematics and 2) experiment. Note that both operate in restricted domains, but within those domains they *are* general. (E.g., mathematics cannot generate its own axioms, postulates, and

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26 April 2008 19:54 Yes, truly general AI is only possible in the case of infinite processing power, which is likely not physically realizable. How much generality can be achieved with how much processing power is not yet known -- math
