AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
I think embodied linguistic experience could be *useful* for an AGI to do mathematics. The reason for this is that creativity comes from drawing on a huge body of knowledge and experience in different domains. But on the other hand, I don't think embodied experience is necessary. It could even have

Re: [agi] Re: Defining AGI

2008-10-18 Thread Trent Waddington
On Sat, Oct 18, 2008 at 2:39 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: a certain degree (mirror neurons). Oh, you just hit my other annoyance. How does that work? 'Mirror neurons!' IT TELLS US NOTHING. Trent

AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you don't like mirror neurons, forget them. They are not necessary for my argument. Trent wrote: Oh, you just hit my other annoyance. How does that work? 'Mirror neurons!' IT TELLS US NOTHING. Trent

Re: [agi] Re: Defining AGI

2008-10-18 Thread Mike Tintner
Trent: Oh, you just hit my other annoyance. How does that work? 'Mirror neurons!' IT TELLS US NOTHING. Trent, how do they work? By observing the shape of humans and animals (what shape they're in), our brain and body automatically *shape our bodies to mirror their shape* (put

Re: [agi] Re: Defining AGI

2008-10-18 Thread David Hart
On Sat, Oct 18, 2008 at 9:48 PM, Mike Tintner [EMAIL PROTECTED] wrote: [snip] We understand and think with our whole bodies. Mike, these statements are an *enormous* leap from the actual study of mirror neurons. It's my hunch that the hypothesis paraphrased above is generally true, but it is

Re: [agi] Re: Defining AGI

2008-10-18 Thread Ben Goertzel
I am well aware that building even *virtual* embodiment (in simulated worlds) is hard. However, creating human-level AGI is **so** hard that doing other hard things in order to make the AGI task a bit easier seems to make sense!! One of the things the OpenCog framework hopes to offer AGI

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner
Trent, I should have added that our brain and body, by observing the mere shape/outline of others' bodies, as in Matisse's Dancers, can tell not only how to *shape* our own outline, but how to dispose our *whole body* - we transpose/translate (or flesh out) a static two-dimensional body

Re: [agi] Re: Defining AGI

2008-10-18 Thread wannabe
I do appreciate the support of embodiment frameworks. And I really get the feeling that Matthias is wrong about embodiment because when it comes down to it, embodiment is an assumption made by people when judging if something is intelligent. But that's just me. And what's up with language as

AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I do not agree that body mapping is necessary for general intelligence. But even if it were, it would be one of the easier problems today. In the area of mapping the body onto another (artificial) body, computers are already very smart: see the video on this page: http://www.image-metrics.com/ -Matthias

AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
There is no big depth in the language. There is only depth in the information (i.e. patterns) which is transferred using the language. Human language seems so magical because it is so ambiguous at first glance. And it is precisely these ambiguities that show my model of transferred patterns is right. An

Re: [agi] Re: Defining AGI

2008-10-18 Thread Mike Tintner
David: Mike, these statements are an *enormous* leap from the actual study of mirror neurons. It's my hunch that the hypothesis paraphrased above is generally true, but it is *far* from being fully supported by, or understood via, the empirical evidence. [snip] these are all original

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner
Matthias: I do not agree that body mapping is necessary for general intelligence. But even if it were, it would be one of the easier problems today. In the area of mapping the body onto another (artificial) body, computers are already very smart: see the video on this page: http://www.image-metrics.com/

AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think here you can see that automated mapping between different faces is possible and the computer can smoothly morph between them. I think the performance is much better than what human imagination can achieve. http://de.youtube.com/watch?v=nice6NYb_WA -Matthias Mike Tintner wrote

Re: AW: [agi] Re: Defining AGI

2008-10-18 Thread wannabe
Matthias wrote: There is no big depth in the language. There is only depth in the information (i.e. patterns) which is transferred using the language. This is a claim with which I obviously disagree. I imagine linguists would have trouble with it as well. He then goes on to conclude: Therefore

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner
Matthias: I think here you can see that automated mapping between different faces is possible and the computer can smoothly morph between them. I think the performance is much better than what human imagination can achieve. http://de.youtube.com/watch?v=nice6NYb_WA Matthias, Perhaps we're

Re: [agi] META: A possible re-focusing of this list

2008-10-18 Thread Steve Richfield
Ben, First, note that I do NOT fall into the group that says that you can't engineer digital AGI. However, I DO believe that present puny computers are not up to the task, and some additional specific research (that I have previously written about here) needs to be done before programming can be

AW: AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you can build a system which understands human language, you are still far away from AGI. Being able to understand someone else's language in no way implies having the same intelligence. I think many people understood Einstein's language, but they were not able to create

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Harry Chesley
On 10/18/2008 9:27 AM, Mike Tintner wrote: What rational computers can't do is find similarities between disparate, irregular objects - via fluid transformation - the essence of imagination. So you don't believe that this is possible by finding combinations of abstract shapes (lines,

Re: [agi] META: A possible re-focusing of this list

2008-10-18 Thread Ben Goertzel
Steve, Ignoring your overheated invective, I will make one more attempt to address your objections. **If and only if** you will be so kind as to summarize them in a compact form in a single email. If you give me a numbered list of your objections against my approach to AGI and other similar

[agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Eric Burton
http://www.popularmechanics.com/technology/industry/4287680.html?series=60 :O

[agi] Re: Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Eric Burton
It looks like if Sim City isn't a lie then machines -will- bootstrap themselves to sentience but -will not- reach human intelligence. I'm not too sure what this means. Maybe that we'll never see a faithful duplication of a characteristically human distribution of abilities in a machine. But I

AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think it does involve being confronted with two different faces or objects, randomly chosen/positioned, and finding/recognizing the similarities between them. If you watched the video carefully, you will have heard them speak of automated algorithms which do the matching. On

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: Anyway, I think it's reasonable to doubt my story about how RSI will be achieved. All I have is a plausibility argument, not a proof. What got my dander up about Matt's argument was that he was claiming to have a debunking of

Re: AW: [agi] Re: Defining AGI

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: Therefore I think the path towards AGI mainly via studying language understanding will be very long and may always lead to a dead end. No. Language modeling is AI-complete.

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Ben Goertzel
Matt wrote: I think the source of our disagreement is the I in RSI. What does it mean to improve? From Ben's OpenCog roadmap (see http://www.opencog.org/wiki/OpenCogPrime:Roadmap ) I think it is clear that Ben's definition of improvement is Turing's definition of AI: more like a human. In

Re: [agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Bob Mottram
2008/10/18 Eric Burton [EMAIL PROTECTED]: http://www.popularmechanics.com/technology/industry/4287680.html?series=60 Some thoughts on this: http://streebgreebling.blogspot.com/2008/10/will-wright-on-ai.html

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner
Matthias, When a programmer (or cameraman) macroscopically positions two faces - adjusting them manually so that they are capable of precise point-to-point matching - that proceeds from an initial act of visual object recognition - and indeed imagination, as I have defined it. He will

Re: [agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread BillK
On Sat, Oct 18, 2008 at 8:28 PM, Bob Mottram wrote: Some thoughts on this: http://streebgreebling.blogspot.com/2008/10/will-wright-on-ai.html I like his first point: MACHINES WILL NEVER ACHIEVE HUMAN INTELLIGENCE According to Wright, one of the main benefits of the quest for AI is a better

AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
After the first positioning there is no point-to-point matching at all. The main intelligence comes from the knowledge base of hundreds of 3D-scanned faces. This is a huge vector space. And it is no easy task to match a given picture of a face with a vector (= a face) within the vector space. The
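A minimal sketch of the nearest-neighbour matching step described above, assuming each face has already been reduced to a fixed-length parameter vector (e.g. from a 3D scan). The names, the toy data, and the Euclidean metric are illustrative assumptions only, not taken from the Image Metrics system:

import numpy as np

def nearest_face(query, face_db):
    # Hypothetical sketch: query is a 1-D array (a new face in the shared
    # parameter space), face_db is a 2-D array with one row per scanned face.
    # Returns the index of the stored face vector closest to the query.
    distances = np.linalg.norm(face_db - query, axis=1)  # distance to every stored face
    return int(np.argmin(distances))

# Toy usage: 100 stored faces, 64 parameters each
face_db = np.random.rand(100, 64)
print(nearest_face(np.random.rand(64), face_db))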

Re: [agi] Re: Defining AGI

2008-10-18 Thread David Hart
Mike, I think you won't get a disagreement in principle about the benefits of melding creativity and rationality, and of grokking/manipulating concepts in metaphorical wholes. But really, a thoughtful conversation about *how* the OCP design addresses these issues can't proceed until you've RTFBs.

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread William Pearson
2008/10/18 Ben Goertzel [EMAIL PROTECTED]: 1) There definitely IS such a thing as a better algorithm for intelligence in general. For instance, compare AIXI with an algorithm called AIXI_frog, that works exactly like AIXI, but in between every two of AIXI's computational operations, it
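Ben's AIXI_frog example, as quoted above, is just the observation that two agents can be behaviourally identical while one is a strictly worse algorithm. A minimal sketch of that idea, assuming a hypothetical agent object exposing an act(observation) method; nothing here is from the original post:

class FrogWrapper:
    # Wraps an agent and wastes work before every decision. The wrapped agent
    # returns exactly the same actions as the original, so it is equally
    # capable in the input-output sense, yet strictly less efficient.
    def __init__(self, agent, waste_steps=1_000_000):
        self.agent = agent
        self.waste_steps = waste_steps

    def act(self, observation):
        self._croak()                       # pointless extra computation
        return self.agent.act(observation)  # identical decision to the base agent

    def _croak(self):
        x = 0
        for i in range(self.waste_steps):   # burn cycles, change nothing
            x ^= i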

[agi] List of Feasibility Objections

2008-10-18 Thread Steve Richfield
Everyone, Ben has made a really wonderful offer here: On 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: I will make one more attempt to address your objections. **If and only if** you will be so kind as to summarize them in a compact form in a single email. If you give me a numbered list

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Abram Demski
On Sat, Oct 18, 2008 at 3:38 PM, Mike Tintner [EMAIL PROTECTED] wrote: Matthias, When a programmer (or cameraman) macroscopically positions two faces - adjusting them manually so that they are capable of precise point-to-point matching - that proceeds from an initial act of visual object

Re: [agi] List of Feasibility Objections

2008-10-18 Thread Abram Demski
Non-Constructive Logic: Any AI method that approximates AIXI will lack the human capability to reason about non-computable entities. On Sat, Oct 18, 2008 at 8:20 PM, Steve Richfield [EMAIL PROTECTED] wrote: Everyone, Ben has made a really wonderful offer here: On 10/18/08, Ben Goertzel

[agi] Thoughtware.TV: Can Robots 'think' Like Humans?

2008-10-18 Thread Andrés Colón
Dear friends, This recent Thoughtware.TV addition might be of interest to you. It is a BBC video, contributed by Arlind, on the Turing Test. Our link: http://www.thoughtware.tv/videos/watch/2945-Can-Robots-think-Like-Humans Theirs: http://news.bbc.co.uk/2/hi/science/nature/7666836.stm There

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: The limitations of your imagination are striking ;-p I imagine a future where AGI sneaks past us, like where Google can understand 50% of 8 word long natural language questions this year, and 60% next year. Where they gradually

Re: [agi] List of Feasibility Objections

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote: Non-Constructive Logic: Any AI method that approximates AIXI will lack the human capability to reason about non-computable entities. Then how is it that humans can do it? According to the AIXI theorem, if we can do this, it makes us

[agi] constructivist issues

2008-10-18 Thread Abram Demski
Matt, I suppose you don't care about Steve's 'do not comment' request? Oh well, I want to discuss this anyway. 'Tis why I posted in the first place. No, I do not claim that computer theorem-provers cannot prove Goedel's Theorem. It has been done. The objection applies specifically to AIXI -- AIXI
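For readers joining this sub-thread: the AIXI agent under discussion is standardly defined (following Hutter) by the expectimax expression below; the inner sum ranges over all programs q for a universal Turing machine U, which is what makes AIXI incomputable and is where the constructivity objection bites. This is textbook background rather than material from the thread:

\[ a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)} \]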

Re: AW: [agi] Re: Defining AGI

2008-10-18 Thread Terren Suydam
Nice post. I'm not sure language is separable from any kind of intelligence we can meaningfully interact with. It's important to note (at least) two ways of talking about language: 1. specific aspects of language - what someone building an NLP module is focused on (e.g. the rules of English