On Wed, Oct 22, 2008 at 7:22 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> However, it's possible that working with Lojban could help cut through
> the following "chicken and egg" problem:
>
> -- if your AI understands the world, then it can disambiguate language
>
> -- if your AI can disambiguate language, then it can learn enough from
> language to understand the world
>
> Potentially, using Lojban you could teach the AI enough about the world
> that it could then use this understanding to guide its disambiguation
> of natural language
>
> This is not the approach I'm currently taking, but it doesn't seem
> unreasonable to me.
>

The problem is to gradually improve the overall causal model of the
environment (and its application to control), including language and
the dynamics of the world. A better model allows more detailed
experience, so by having a better inbuilt model of one aspect of the
environment, such as language, it becomes possible to communicate a
richer description of other aspects of the environment. But it's not
obvious that the bandwidth of experience is the bottleneck here. More
likely the bottleneck is the cognitive algorithm itself, which simply
can't improve its model efficiently, so feeding it more experience
through tricks like this is like trying to get a hundredfold speedup
out of an O(log(log(n))) algorithm by throwing more hardware at it.
It should be possible to get proof-of-concept results about
efficiency without resorting to Cycs and Lojbans, and after that
they'll turn out to be irrelevant.
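
(A rough numeric sketch of the analogy, purely illustrative, assuming the
payoff from n units of experience grows like log(log(n)):

import math

def payoff(n):
    # assumed doubly-logarithmic return on n units of experience
    return math.log(math.log(n))

n = 1e9                          # say, a billion units of experience
current = payoff(n)              # ~3.03
# To merely double the payoff we need log(log(n')) = 2*log(log(n)),
# i.e. log(n') = (log(n))**2, i.e. n' = exp((log(n))**2) ~ 1e186.
n_needed = math.exp(math.log(n) ** 2)
print(current, payoff(n_needed) / current)   # ~3.03 and ~2.0

Even doubling the payoff requires going from about 1e9 to about 1e186
units of experience, so piping in more experience clearly isn't the lever
to pull.)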

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

