On Sat, Aug 30, 2008 at 9:20 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
> My view is a little different.  I think these answers are going to come out
> of a combination of theoretical advances with lessons learned via
> experimenting with early-stage AGI systems, rather than being arrived at
> in-advance based on pure armchair theorization...
>

That was also my initial position, before I learned that there is in
fact a domain for productive armchair theorizing in this case, and
that it seems not all that directly connected to the technical AGI
work. There is interplay between FAI and AGI at the level of
fundamental concepts: thinking about how to do what you want may
inform the design, and thinking about how to implement powerful
optimization may inform the way communication of intention needs to
be framed. Old AI fallacies and poorly understood concepts employed
in thinking about AI are problems for both FAI and AGI, for somewhat
separate reasons, but common problems all the same. And then there is
a part where you start from your own side, as a human, that is so
removed from the implementation of AGI as to make specific design
issues irrelevant to the problem that needs solving in any case.
True, there are questions that need to wait, but there are others
that don't. It might even turn out that a working AGI design invented
without considering the FAI questions will only be capable of going
FOOM in an arbitrary direction, although I doubt it, and I think that
working on AGI without rigorously understanding FAI may still be
worthwhile, if you know where to stop should you succeed.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

