I can't see how this can be true. Let us look at a few things.

Theory of languages. ¡Todos hablan español! ("Everyone speaks Spanish!") Well,
let us analyse what you learnt. You learnt about agreement: Spanish nouns are
masculine or feminine, and adjectives (usually) come after the noun. If the
noun is feminine the adjective ends in "a". There are tenses. Verbs have
stems: vend- = sell (-er in Spanish, -re in French, for the infinitive).

I feel the people at Google Translate never studied Spanish. The genders, for
one thing, are all over the place. In Arabic (and Russian) the verb "to be" is
not inserted in the translation of attributives, and the attributive ending
(in Russian) is wrong to boot.

What am I coming round to? It is this. If you have heuristics and you can
reliably reconstruct the grammar of a language from bilingual text, then fine:
your grammar is AI-derived. Google Translate is not able to do this. It would
benefit no end from having some "hard wired" grammatical rules. In both French
and German its handling of gender is all over the place.
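A rule like that can be hard wired quite cheaply. Here is a minimal sketch in Python of one such rule, Spanish noun-adjective gender agreement. The tiny lexicon and the function names are my own illustrative assumptions, not anything from Google Translate's internals:

```python
# Sketch of one "hard wired" grammatical rule: Spanish noun-adjective
# gender agreement. Lexicon and names are illustrative assumptions.

# Tiny hand-coded lexicon: noun -> grammatical gender.
NOUN_GENDER = {
    "casa": "f",   # house (feminine)
    "libro": "m",  # book (masculine)
}

def agree_adjective(adjective_masc: str, noun: str) -> str:
    """Inflect a regular -o adjective to agree with the noun's gender."""
    if NOUN_GENDER[noun] == "f" and adjective_masc.endswith("o"):
        return adjective_masc[:-1] + "a"
    return adjective_masc

def noun_phrase(noun: str, adjective_masc: str) -> str:
    """Spanish adjectives usually follow the noun."""
    return f"{noun} {agree_adjective(adjective_masc, noun)}"

print(noun_phrase("casa", "blanco"))   # casa blanca
print(noun_phrase("libro", "blanco"))  # libro blanco
```

A statistical system has to rediscover this regularity from data; a few lines of hard-wired rule get it right every time for the regular cases.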

Let us now take a slightly more philosophical view. What do we mean by
"intelligence"? Useful intelligence involves memory and knowledge, and these
are hard wired. Something like *Relativity* is not simply a matter of common
sense. To do science you not only need the ability to manipulate knowledge,
you also need certain general principles.

Let me return to language. Suppose we are told that Spanish is fairly regular:
infinitives end in -ar, -er, or -ir, and we know how each class conjugates. We
can then work out the morphology of any regular Spanish verb from a few
examples. Arabic is a little more complicated in that there are prefixes and
suffixes, but there are morphology types, and again, given a few examples,
you can work out what they are.
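To make the point concrete, here is a short Python sketch of how little machinery regular Spanish conjugation needs: the standard present-indicative ending tables plus a stem. The function name is my own; the paradigms are the ordinary textbook ones:

```python
# Regular Spanish present-indicative endings, by infinitive class.
# These are the standard textbook paradigms (yo, tú, él, nosotros,
# vosotros, ellos).
ENDINGS = {
    "ar": ["o", "as", "a", "amos", "áis", "an"],
    "er": ["o", "es", "e", "emos", "éis", "en"],
    "ir": ["o", "es", "e", "imos", "ís", "en"],
}

def conjugate_present(infinitive: str) -> list[str]:
    """Present-tense forms of a regular verb, e.g. 'vender' (to sell)."""
    stem, conj = infinitive[:-2], infinitive[-2:]
    return [stem + ending for ending in ENDINGS[conj]]

print(conjugate_present("vender"))
# ['vendo', 'vendes', 'vende', 'vendemos', 'vendéis', 'venden']
print(conjugate_present("hablar"))
# ['hablo', 'hablas', 'habla', 'hablamos', 'habláis', 'hablan']
```

Given a handful of example verbs you could even induce the ending tables themselves; the point is how small the hard-wired core is.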

Looking at this question biologically, we find that certain knowledge is laid
down at a particular age. Language is picked up in childhood. The Arab child
will understand morphology at the age of 5 or 6. He won't call it that; it
will just be that certain things sound wrong. This being so, there are aspects
of our behaviour that are laid down and "hardwired", and other things which
are not.

In fact we would view a system as being intelligent if it could bring a large
amount of knowledge to bear on the problem.


  - Ian Parker

On 27 June 2010 22:36, M E <[email protected]> wrote:

>  I sketched a graph the other day which represented my thoughts on the
> usefulness of hardcoding knowledge into an AI.  (Graph attached)
>
> Basically, the more hardcoded knowledge you include in an AI, or AGI, the
> lower the overall intelligence it will have, but the faster you will reach
> that value.  I would expect any real AGI to be toward the left of the graph
> with systems like CYC toward the right.
>
> Matt
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
