David Jones wrote:
> Natural language requires more than the words on the page in the real world. 
> Of course that didn't work.

Any knowledge that can be demonstrated over a text-only channel (as in the 
Turing test) can also be learned over a text-only channel.

> Cyc also is trying to store knowledge about a super complicated world in 
> simplistic forms and also requires more data to get right.

Cyc failed because it lacks natural language. The vast knowledge store of the 
internet is unintelligible to Cyc, and the average person can't use it because 
they don't speak CycL and have neither the ability nor the patience to 
translate their implicit thoughts into augmented first-order logic. Cyc's 
approach was understandable when the project started in 1984, when there was 
neither an internet to learn from nor the vast computing power required to 
learn natural language from unlabeled examples, as children do.
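To make "learning from unlabeled examples" concrete, here is a minimal sketch 
(the corpus and the tie-breaking rule are purely illustrative, not anyone's 
actual system): the program extracts statistics from raw text, with no 
hand-written grammar or labels, and uses them to predict.

```python
from collections import Counter

# Toy corpus standing in for "unlabeled examples": raw words, no labels.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which. Pure statistics, no hand-coded rules.
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    """Most frequent successor of `word` observed in the corpus."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("the"))  # "cat" (seen twice, vs "mat" once)
```

Scaled up by many orders of magnitude of data and compute, this is the 
direction the argument points in, as opposed to hand-entering knowledge.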

> Vision and other sensory interpretation, on the other hand, do not require 
> more info because that is where the experience comes from.

Without natural language, your system will fail too. You don't have enough 
computing power to learn language, much less the million times more computing 
power you need to learn to see.

 -- Matt Mahoney, [email protected]




________________________________
From: David Jones <[email protected]>
To: agi <[email protected]>
Sent: Mon, June 28, 2010 9:28:57 PM
Subject: Re: [agi] A Primary Distinction for an AGI


Natural language requires more than the words on the page in the real world. Of 
course that didn't work.
Cyc also is trying to store knowledge about a super complicated world in 
simplistic forms and also requires more data to get right.
Vision and other sensory interpretation, on the other hand, do not require more 
info because that is where the experience comes from.
On Jun 28, 2010 8:52 PM, "Matt Mahoney" <[email protected]> wrote:
>
>
>David Jones wrote:
>> I also want to mention that I develop solutions to the toy problems with the 
>> re...
>A little research will show you the folly of this approach. For example, the 
>toy approach to language modeling is to write a simplified grammar that 
>approximates English, then write a parser, then some code to analyze the parse 
>tree and take some action. The classic example is SHRDLU (blocks
> world, http://en.wikipedia.org/wiki/SHRDLU ). Efforts like that have always 
> stalled. That is not how people learn language. People learn from lots of 
> examples, not explicit rules, and they learn semantics before grammar.
>
>
>For a second example, the toy approach to modeling logical reasoning is to 
>design a knowledge representation based on augmented first order logic, then 
>write code to implement deduction, forward chaining, backward chaining, etc. 
>The classic example is Cyc. Efforts like that have always stalled. That is not 
>how people reason. People learn to associate events that occur in quick 
>succession, and then reason by chaining associations. This model is built in. 
>People might later learn math, programming, and formal logic as rules for 
>manipulating symbols within the framework of natural language learning.
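A minimal sketch of the association-chaining model described above (the events 
and the single-successor chaining rule are illustrative assumptions, not a 
claim about how any real system works): events observed in quick succession 
become associated, and "reasoning" is then just following chains of those 
associations.

```python
# Pairs of events observed in quick succession become associated.
observations = [
    ("clouds", "rain"),
    ("rain", "wet ground"),
    ("wet ground", "mud"),
]

assoc = {}
for earlier, later in observations:
    assoc.setdefault(earlier, []).append(later)

def chain(event, steps):
    """Reason forward by chaining learned associations from `event`."""
    path = [event]
    for _ in range(steps):
        successors = assoc.get(path[-1])
        if not successors:
            break
        path.append(successors[0])  # here each event has one successor
    return path

print(chain("clouds", 3))  # ['clouds', 'rain', 'wet ground', 'mud']
```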
>
>
>For a third example, the toy
> approach to modeling vision is to segment the image into regions and try to 
> interpret the meaning of each region. Efforts like that have always stalled. 
> That is not how people see. People learn to recognize visual features that 
> they have seen before. Features are made up of weighted sums of lots of 
> simpler features with learned weights. Features range from dots, edges, 
> color, and motion at the lowest levels, to complex objects like faces at the 
> higher levels. Vision is integrated with lots of other knowledge sources. You 
> see what you expect to see.
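The "weighted sums of simpler features" idea above can be sketched in a few 
lines (the weights here are hand-set for illustration only; in the model being 
described they would be learned):

```python
# A feature at one level is a thresholded weighted sum of simpler
# features from the level below.
def feature(inputs, weights, threshold=0.5):
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if s > threshold else 0.0

# Lowest level: raw edge/dot detector activations (given here as 0/1).
edges = [1.0, 0.0, 1.0, 1.0]

# Mid level: two features, each a weighted sum over the edge detectors.
w_corner = [0.6, 0.0, 0.6, 0.0]  # illustrative hand-set weights
w_stripe = [0.0, 0.5, 0.0, 0.5]
mid = [feature(edges, w_corner), feature(edges, w_stripe)]

# Higher level: a complex-object feature over the mid-level features.
face_like = feature(mid, [0.7, 0.4])
print(mid, face_like)  # [1.0, 0.0] 1.0
```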
>
>
>The common theme is that real AGI consists of a learning algorithm, an opaque 
>knowledge representation, and a vast amount of training data and computing 
>power. It is not an extension of a toy system where you code all the knowledge 
>yourself. That doesn't scale. You can't know more than an AGI that knows more 
>than you. So I suggest you do a little research instead of continuing to
> repeat all the mistakes that were made 50 years ago. You aren't the first 
> person to do these kinds of experiments.
>
>
> -- Matt Mahoney, [email protected]
>
>
>
>
>
________________________________
From: David Jones <[email protected]>
>To: agi <[email protected]>
>Sent: Mon, June 28, 2010 4:00:24 PM
>
>Subject: Re: [agi] A Primary Distinction for an AGI
>
>I also want to mention that I develop solutions to the toy problems with the 
>real problems in mind....
>On Mon, Jun 28, 2010 at 3:56 PM, David Jones <[email protected]> wrote:
>>
>>> That does not have to be the case. Yes, you need to know what problems you 
>>> might have in more co...
>>
>>
>>
>>> On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace 
>>> <[email protected]> wrote:
>>>>
>>>>> On Mon, Jun 28, 2010 at 4:54 PM, David Jones <[email protected]> 
>>>>> wrote:
>>>>>>>> > But, that's w...


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
