I don't want to knock anybody else's approach, because I don't know
what will work.  But if there were a kind of overarching framework
that subsumes all the ways people like to start AGI, it might
help.

Mike

On 7/16/13, Mike Tintner <[email protected]> wrote:
> Mike A:
>> my book argues for a philosophy-first starting
>> point
> It would be good if someone got the message which Deutsch and I agree
> on. A "philosophy first" approach means simply "first define the kind of
> problems an AGI must solve."  Or: "first define what intelligence is
> [including both halves of intelligence, represented by AGI and
> narrow AI]."  In fact, that was many people's first instinct. There
> was a very extended discussion here a few years back about the nature
> of intelligence. But basically, everyone gave up in the end and ploughed
> on with their architectures - or, as Ben concluded, "I'll know it when I
> see it". (He hasn't seen it yet.)
>
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

