The limited goal I am pursuing is not to answer many questions but to
show that the program is capable of understanding (or acting as if it
is capable of understanding) something.  To do this it not only has to
search for a piece of information that matches some key words, some
grouping of key words, or some learned categorization based on the
inclusion and/or sequencing of key words; it also has to interpret
something from the (limited) domain of the distributed data.  The key
here, I believe, is that some data (like words) can be interpreted as
instructions for the program to subsequently act on, and these
abilities can lead to some primitive judgement.  So my opinion is that
the early efforts at limited-domain AI did not work because the
programmers did not have the insight (or the computing power) to write
programs that could learn to 'read' sentences and interpret them as
actions for the program itself to take.  Of course this ability would
lead to more complexity, but since I fully accept that my program,
even if it worked, would at best be seriously limited, I should have
no problem limiting that particular kind of complexity.
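
To make the idea concrete, here is a minimal sketch (in Python) of
what I mean by words acting as instructions.  Everything in it (the
action table, the sample sentences) is invented for illustration; a
real program would have to learn such mappings rather than have them
hard-coded:

# Toy sketch only: the action table is invented for the example.
ACTIONS = {
    "search": lambda topic: print("searching for:", topic),
    "remember": lambda fact: print("storing fact:", fact),
}

def interpret(sentence):
    # Treat the first recognized word as an instruction and the
    # rest of the sentence as its argument.
    words = sentence.lower().split()
    for i, word in enumerate(words):
        if word in ACTIONS:
            ACTIONS[word](" ".join(words[i + 1:]))
            return
    print("no instruction recognized in:", sentence)

interpret("Search chess openings")       # -> searching for: chess openings
interpret("Remember the sky is blue")    # -> storing fact: the sky is blue

The point is only that the same words the program reads can select the
action it takes next.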

If it is indeed possible for me to write a crude version of this
program, it might not be the first program to show some ability to
interpret a sentence without relying on a severe uniqueness
constraint, and to use that interpretation as guidance toward an
action, but it would be among the first that was carefully crafted to
do so.

There are quite a few characteristics that I believe are necessary for
'higher' general intelligence (even if that higher intelligence is
limited).  The ability to learn to interpret a sentence (or a concept)
as guidance for its own actions is only one of them.  The program also
has to be able to work with non-unique symbols, and it has to be able
to learn new strategies by developing new types, so that it can create
virtual sub-programs based on its learning.  And it has to be able to
use judgement which it has learned, which it has read (or 'read' from
its observations), and which it can derive from experience.
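
The 'virtual sub-programs' point can be sketched the same way.  In
this toy example (again, every name in it is invented), learned steps
are stored as plain data and then composed into a new callable at
runtime; that composed procedure is roughly what I mean by a virtual
sub-program built from learning:

# Toy sketch only: the 'learned' steps are hand-picked here.
learned_steps = {
    "normalize": str.lower,   # learned step: lowercase the text
    "tokenize": str.split,    # learned step: break it into words
}

def compose(step_names):
    # Build a new procedure out of previously learned steps.
    def subprogram(text):
        result = text
        for name in step_names:
            result = learned_steps[name](result)
        return result
    return subprogram

parse = compose(["normalize", "tokenize"])  # a new 'virtual sub-program'
print(parse("The Sky Is Blue"))             # -> ['the', 'sky', 'is', 'blue']

A real version would of course have to discover which steps to
compose, rather than being told.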

Jim Bromer

On Tue, Dec 11, 2012 at 8:32 PM, Matt Mahoney <[email protected]> wrote:

> On Tue, Dec 11, 2012 at 3:11 PM, Jim Bromer <[email protected]> wrote:
> >
> > I was just asking Google and Bing questions and I was surprised at how
> well they did.
>
> Yes, it is amazing to see how far search engines have come, and how
> close they are to AI. One thing that they have and you don't is enough
> computing power to keep a copy of the internet in RAM.
>
> --
> -- Matt Mahoney, [email protected]


