>> However, looking at:
>>
>> http://multiverseaccordingtoben.blogspot.com/2011/06/why-is-evaluating-partial-progress.html
>>
>> ...certainly suggests that Ben has some rather odd ideas about testing.
>
> It's called "wishful thinking". He proposes "cognitive synergy" as an
> excuse for not testing. When all of the components are put together,
> it will magically work. It's just intuition, of course, not backed by
> any evidence. In fact, all of the evidence points the other way. The
> most powerful models in machine learning are ensemble models. You
> combine lots of predictors and get more accurate predictions. If you
> remove half of them, then you still get most of the accuracy. Each
> model can be tested independently of the others, because that's how
> they work in practice.
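[The graceful-degradation claim quoted above can be illustrated with a minimal synthetic sketch. This is not anyone's actual system: the 65% per-member accuracy, the independence of the members, and the majority-vote rule are all invented assumptions for illustration.]

```python
import random

random.seed(0)

# Synthetic setup: each weak "predictor" independently agrees with the
# true binary label 65% of the time (an assumed figure for illustration).
def make_predictor(accuracy=0.65):
    def predict(y_true):
        return y_true if random.random() < accuracy else 1 - y_true
    return predict

labels = [random.randint(0, 1) for _ in range(2000)]
predictors = [make_predictor() for _ in range(20)]

def ensemble_accuracy(members):
    """Majority-vote accuracy of a subset of the ensemble."""
    correct = 0
    for y in labels:
        votes = sum(p(y) for p in members)          # number voting "1"
        guess = 1 if votes * 2 >= len(members) else 0
        correct += guess == y
    return correct / len(labels)

full = ensemble_accuracy(predictors)       # all 20 members
half = ensemble_accuracy(predictors[:10])  # half removed
single = ensemble_accuracy(predictors[:1])  # one member alone
print(f"full: {full:.2f}  half: {half:.2f}  single: {single:.2f}")
```

[Under these toy assumptions the full ensemble scores well above any single member, and dropping half the members costs only a few points of accuracy — each member can also be scored on its own, which is the testability point being made above.]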

I understand ensemble machine learning perfectly well, and have used the
technique frequently in domains such as bioinformatics, news analysis,
and finance...

I don't think that's a good model for human-level AGI... Certainly it bears
very little resemblance to the known workings of the human mind or brain.

I don't have odd ideas about testing in general. I have a different theory of
mind than you do, which leads to theories about developing and testing
*general intelligence* systems that differ from yours.

As the link you give above states, my ideas about testing proto-AGI systems
emerge directly from my theory of mind....  This theory of mind was developed
long before I started working on practical proto-AGI systems, and not as an
"excuse" for anything...

-- Ben G


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
