I'm just beginning a project developing a programming language called 
Basic English, inspired by Terry Winograd's PhD thesis, so my criteria 
are still taking shape, but these few are high on my list:

- Easy to teach new abstract knowledge and problem-solving methods.  
  Approximate commonsense-language explanations as a "programming 
  language" would be one natural way for most people to teach, although 
  many different representations should be incorporated for different 
  domains of expertise.

- A way to switch between representations and thinking processes when one 
  set of methods fails.  This would keep expert knowledge in one domain 
  connected to expert knowledge from other domains.

- Credit-assignment for adding to and modifying representations after 
  experiencing successes and failures at accomplishing goals.  (Not just 
  reinforcement learning for all active processes after winning or losing 
  a game, for example.)

There are many different ways to implement all of these ideas, but they 
are often approached independently.

Does anyone have any pointers for how to use English to program?

Terry's thesis is good, but it's not really a programming language.  I'd 
like to start with primitives in a Computer Science mental realm, where 
words like Variable, Value, Define, and Let are understood as basic CS 
primitives.  Then I want to add the capability to define panalogies to 
other mental realms where the verbs and nouns are a little different, 
until many different mental realms can be described, and methods of 
programming can be abstracted to the more powerful mental realms we 
commonly employ in social communication.

I'm thinking that it will begin with only understanding one type of verb: 
the command form of verbs, such as [Let a new variable be called X.  Let 
the value of X be the phrase "A dog".  When I say to tell me the value of 
the variable X, print the value of X.]
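
Those three command-form sentences could be handled by something as 
small as a pattern matcher to start with.  Here's a minimal sketch in 
Python (my illustration of the idea, not the project's actual parser; 
the patterns are hard-coded to exactly the sentences above):

```python
import re

variables = {}  # the interpreter's store of named values

def interpret(sentence):
    """Interpret one command-form English sentence against three patterns."""
    s = sentence.strip().rstrip(".")
    m = re.fullmatch(r"Let a new variable be called (\w+)", s)
    if m:
        variables[m.group(1)] = None        # declare the variable
        return None
    m = re.fullmatch(r'Let the value of (\w+) be the phrase "(.*)"', s)
    if m:
        variables[m.group(1)] = m.group(2)  # assign the quoted phrase
        return None
    m = re.fullmatch(r"Tell me the value of the variable (\w+)", s)
    if m:
        return variables[m.group(1)]        # recall the value
    raise ValueError("I don't understand: " + sentence)

interpret("Let a new variable be called X.")
interpret('Let the value of X be the phrase "A dog".')
print(interpret("Tell me the value of the variable X."))  # prints: A dog
```

Of course the real system would need something far more flexible than 
regular expressions, but even this shows the Let/Tell command forms 
mapping directly onto declare/assign/recall primitives.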

I hope it will be based on a system with multiple layers of reflective 
critics and selectors.  Back to coding...
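
By critics and selectors I mean the kind of loop Minsky describes: 
critics notice that something is going wrong, and selectors choose a 
different way of thinking in response.  A bare sketch (the function 
names and state fields here are my invention, just to show the loop's 
shape):

```python
# Hypothetical critic/selector loop: all names below are illustrative.

def critic_no_progress(state):
    # Fires when the current method has made no progress on the goal.
    return state["progress"] == 0

def selector_switch_realm(state):
    # Responds by switching to a different representation/mental realm.
    state["realm"] = "spatial" if state["realm"] == "symbolic" else "symbolic"
    return state

# Each layer pairs critics with the selectors they activate; higher
# layers would watch the layers below them reflectively.
LAYERS = [
    [(critic_no_progress, selector_switch_realm)],
]

def reflect(state):
    """Run every layer: any critic that fires hands control to its selector."""
    for layer in LAYERS:
        for critic, selector in layer:
            if critic(state):
                state = selector(state)
    return state

state = reflect({"progress": 0, "realm": "symbolic"})
print(state["realm"])  # prints: spatial
```

Stacking more layers, where upper-layer critics watch the lower layers 
themselves, is what would make the system reflective rather than just 
reactive.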

Brian Smith wrote a 750 page PhD on reflection... Gerry Sussman wrote a 
good example of reflective debugging in Blocks World...  Terry Winograd 
coded a pretty good language understanding system for rigid types of 
language, but the user wasn't able to actually program it through the 
language to understand new language... 

Bo

On Wed, 2 May 2007, Matt Mahoney wrote:

) It seems like a lot of people are already highly motivated to work on AGI, and
) have been for years.  The real problem is that everyone is working independently
) because (1) you are not going to convince anyone that somebody else's approach
) is better, and (2) everyone has a different idea of what AGI is.
) 
) The real problem is not AGI, but replacing human labor when natural language,
) vision, navigation, music processing, etc. is required.  The motive is money. 
) The solution may or may not resemble something that goes on in the human
) brain.
) 
) I think that Google will solve many of these problems.  Then we will argue
) pointlessly about whether or not it is AGI.
) 
) 
) 
) --- William Pearson <[EMAIL PROTECTED]> wrote:
) 
) > My current thinking is that it will take lots of effort by multiple
) > people to take a concept or prototype AGI and turn it into something
) > that is useful in the real world.  And even if one or two people
) > worked on the correct concept for their whole lives, it may not
) > produce the full thing; they may hit bottlenecks in their thinking or
) > lack the proper expertise to build the hardware needed to make it run
) > in anything like real time.  Building up a community seems the only
) > rational way forward.
) > 
) > So how should we go about trying to convince each other we have
) > reasonable concepts that deserve to be tried? I can't answer that
) > question as I am quite bad at convincing others of the interestingness
) > of my work. So I'm wondering what experiments, theories or
) > demonstrations would convince you that someone else was onto
) > something?
) > 
) > For me an approach should have the following feature:
) > 
) > 1) The theory is not completely divorced from brains
) > 
) > It doesn't have to describe everything about human brains, but you can
) > see how roughly a similar sort of system to it may be running in the
) > human brain and can account for things such as motivation, neural
) > plasticity.
) > 
) > 2) It takes some note of theoretical computer science
) > 
) > So nothing that ignores limits to collecting information from the
) > environment or promises unlimited bug-free creation/alteration of
) > programs.
) > 
) > 3) A reason why it is different from normal computers/programs
) > 
) > How it deals with meaning and other things. If it could explain
) > consciousness in some fashion, I would have to abandon my own theories
) > as well.
) > 
) > I'm sure there are other criteria I have as well, but those three are
) > the most obvious. As you can see I'm not too interested in practical
) > results right at the moment. But what about everyone else?
) > 
) >   Will Pearson
) 
) 
) -- Matt Mahoney, [EMAIL PROTECTED]
) 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
