On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Well, my point was that maybe the mistake is the use of additional
>> language constructions, not their absence? You yourself should be
>> able to emulate anything in lambda calculus (you can add an
>> interpreter for any extension as part of the program), and so should
>> your AI, if it's ever to learn open-ended models.
>
> Would you choose to program in raw lambda calculus if you were writing
> a Web server or an e-mail client? If not, why would you choose to do
> so when writing an AGI? It's not like it's an easier problem to start
> with -- it's harder, so being handicapped with bad tools is an even
> bigger problem.
>

I'd write it in a separate language developed for human programmers,
but keep the language the AI interacts with minimalistic, both to
understand how it's supposed to grow and to avoid burdening the core
algorithm with technical details, or being fooled by the appearance of
functionality where there is none, only a simple combination of
sufficiently expressive primitives. Open-ended learning should be
open-ended from the start. It's a general argument, of course, but you
need specifics to fight it.
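[Editor's illustration, not from either poster: a minimal sketch of the
"emulate extensions inside a minimal calculus" point above. Here Python
lambdas stand in for pure lambda terms, and arithmetic is Church-encoded
so "numbers" exist without any numeric primitive in the core language;
all names (zero, succ, add, mul, to_int) are my own.]

```python
# Church numeral n = a function applying f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Arithmetic emerges from combination of these primitives alone:
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral for inspection (host-language convenience,
    not part of the encoded 'language' itself)."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6
```

The same move generalizes: booleans, pairs, lists, and even an
interpreter for a richer language can all be built as terms, which is
the sense in which added constructions are redundant for expressiveness.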

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/