David Jones wrote:
> I also want to mention that I develop solutions to the toy problems with the 
> real problems in mind. I also fully intend to work my way up to the real 
> thing by incrementally adding complexity and exploring the problem well at 
> each level of complexity.

A little research will show you the folly of this approach. For example, the 
toy approach to language modeling is to write a simplified grammar that 
approximates English, then write a parser, then some code to analyze the parse 
tree and take some action. The classic example is SHRDLU (blocks world, 
http://en.wikipedia.org/wiki/SHRDLU ). Efforts like that have always stalled. 
That is not how people learn language. People learn from lots of examples, not 
explicit rules, and they learn semantics before grammar.
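The learn-from-examples idea can be sketched in a few lines of Python (a toy illustration of my own, not code from this thread or from any actual system): instead of hand-writing grammar rules, count word pairs in example text and predict from the statistics.

```python
from collections import defaultdict

# Learn word-pair statistics directly from example text -- no grammar rules.
corpus = "the cat sat on the mat the cat ate the rat".split()

bigrams = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1  # count each observed pair

def predict(prev):
    """Most likely next word, by observed frequency."""
    following = bigrams[prev]
    return max(following, key=following.get) if following else None

print(predict("the"))  # "cat" -- it follows "the" more often than "mat" or "rat"
```

Everything the model "knows" comes from the examples; add more text and the predictions change, with no rules edited by hand.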

For a second example, the toy approach to modeling logical reasoning is to 
design a knowledge representation based on augmented first order logic, then 
write code to implement deduction, forward chaining, backward chaining, etc. 
The classic example is Cyc. Efforts like that have always stalled. That is not 
how people reason. People learn to associate events that occur in quick 
succession, and then reason by chaining associations. This model is built in. 
People might later learn math, programming, and formal logic as rules for 
manipulating symbols within the framework of natural language learning.
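Chaining associations, as opposed to formal deduction, can also be sketched (again my own toy illustration, with made-up event names): events observed in quick succession become linked, and "reasoning" is just following those links outward.

```python
from collections import defaultdict

# Events seen in quick succession become associated.
episodes = [
    ["clouds", "rain", "wet_ground"],
    ["rain", "umbrella"],
]

assoc = defaultdict(set)
for episode in episodes:
    for a, b in zip(episode, episode[1:]):
        assoc[a].add(b)  # link each event to the one that followed it

def chain(start, depth=3):
    """Reason by chaining learned associations outward from an event."""
    frontier, reached = {start}, set()
    for _ in range(depth):
        frontier = {b for a in frontier for b in assoc[a]} - reached
        reached |= frontier
    return reached

print(sorted(chain("clouds")))  # ['rain', 'umbrella', 'wet_ground']
```

Note there is no knowledge representation to author and no inference rules to write; the "knowledge" is just the learned association table.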

For a third example, the toy approach to modeling vision is to segment the 
image into regions and try to interpret the meaning of each region. Efforts 
like that have always stalled. That is not how people see. People learn to 
recognize visual features that they have seen before. Each feature is a 
weighted sum of many simpler features, with the weights learned from 
experience. Features range from dots, edges, color, and motion at the lowest 
levels to complex objects like faces at the higher levels. Vision is 
integrated with many other
knowledge sources. You see what you expect to see.
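The feature-hierarchy idea can be made concrete with a deliberately tiny sketch (my illustration, not from any real vision system): low-level features are weighted sums of pixels, and a higher-level feature is a weighted sum of those responses.

```python
# A 4x4 "image": dark on the left, bright on the right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

def feature(inputs, weights):
    """A feature's response is a weighted sum of its inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def vertical_edge(img, r, c):
    # Low-level feature: responds where brightness changes left to right.
    patch = [img[r][c], img[r][c + 1]]
    return feature(patch, [-1.0, 1.0])

# Higher-level feature: a weighted sum of many low-level edge responses.
edge_responses = [vertical_edge(image, r, c) for r in range(4) for c in range(3)]
line_score = feature(edge_responses, [1.0] * len(edge_responses))
print(line_score)  # 4.0 -- a vertical edge runs down the whole image
```

In a real system the weights at every level are learned rather than hand-set, and the stack is many levels deep, but the building block is the same weighted sum.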

The common theme is that real AGI consists of a learning algorithm, an opaque 
knowledge representation, and a vast amount of training data and computing 
power. It is not an extension of a toy system where you code all the knowledge 
yourself. That doesn't scale. You can't know more than an AGI that knows more 
than you. So I suggest you do a little research instead of continuing to repeat 
all the mistakes that were made 50 years ago. You aren't the first person to do 
these kinds of experiments.

 -- Matt Mahoney, [email protected]




________________________________
From: David Jones <[email protected]>
To: agi <[email protected]>
Sent: Mon, June 28, 2010 4:00:24 PM
Subject: Re: [agi] A Primary Distinction for an AGI

I also want to mention that I develop solutions to the toy problems with the 
real problems in mind. I also fully intend to work my way up to the real thing 
by incrementally adding complexity and exploring the problem well at each level 
of complexity. As you do this, the flaws in the design become clear, and I can 
retrace my steps to create a different solution. The benefit of this strategy 
is that we fully understand the problems at each level of complexity. When you 
run into something that is not accounted for, you are much more likely to know 
how to solve it. Despite its difficulties, I prefer my strategy to the 
alternatives.

Dave


On Mon, Jun 28, 2010 at 3:56 PM, David Jones <[email protected]> wrote:

>That does not have to be the case. Yes, you need to know what problems you 
>might have in more complicated domains to avoid developing completely useless 
>theories on toy problems. But, as you develop for full-complexity problems, 
>you are confronted with several sub-problems. Because you have no previous 
>experience, what tends to happen is that you hack together a solution that 
>barely works and simply isn't right or scalable, because you don't have a 
>full understanding of the individual sub-problems. Having experience with the 
>full problem is important, but forcing yourself to solve every sub-problem at 
>once is not a better strategy at all. You may think my strategy has flaws, 
>but I know that and still choose it because the alternative strategies are 
>worse.
>
>Dave
>
>
>
>On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace <[email protected]> 
>wrote:
>
>On Mon, Jun 28, 2010 at 4:54 PM, David Jones <[email protected]> wrote:
>>>But, that's why it is important to force oneself to solve them in such a 
>>>way that it IS applicable to AGI. It doesn't mean that you have to choose 
>>>a problem that is so hard you can't cheat. It's unnecessary to do that 
>>>unless you can't control your desire to cheat. I can.
>>
>>That would be relevant if it were entirely a problem of willpower and
>>self-discipline, but it isn't. It's also a problem of guidance. A real
>>problem gives you feedback at every step of the way; it keeps blowing
>>your ideas out of the water until you come up with one that will
>>actually work, one you would never have thought of in a vacuum. A toy
>>problem leaves you guessing, and most of your guesses will be wrong in
>>ways you won't know about until you come to try a real problem and
>>realize you have to throw all your work away.
>>
>>Conversely, a toy problem doesn't make your initial job that much
>>easier. It means you have to write less code, sure, but what of it?
>>That was only ever the lesser difficulty. The main reason toy problems
>>are easier is that you can use lower-grade methods that could never
>>scale up to real problems -- in other words, precisely that you can
>>'cheat'. But if you aren't going to cheat, you're sacrificing most of
>>the ease of a toy problem while also sacrificing the priceless
>>feedback from a real problem -- the worst of both worlds.
>>
>


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
