Re: [agi] The concept of a KBMS

2006-11-08 Thread YKY (Yan King Yin)

On 11/7/06, James Ratcliff [EMAIL PROTECTED] wrote:  Yan,  Do you have a version of the book layout that is all on one page, or PDF or anything?
  I would like to print the whole thing off and look over it in more detail. Also there are lots of broken links; run a link checker -- the GO link on the front page doesn't work.  James Ratcliff


Thanks for telling me about the links, I'll fix it ASAP. I can't generate a printer-friendly version because the web pages are changing constantly and I'm only uploading the pages manually. I will submit a paper to some journals when Geniform is mature and stable enough, and when I have the parser as a demo. =)


YKY

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
Yes. All of the above.

We have already heard the statement from all around, I believe, and seen the results that show that one single algorithm is just not going to work, and it's unreasonable to think it would. So then it's really down to breaking up the parts, defining them precisely, and determining what would be the best approach to each.

I specialize in KR and NLP, so for the most part, basic facts and information can easily be stored in an extended logic format, using a frame system, with a large extended ontology. I believe this will need to have confidences and probabilities as well. For vision and most all motor skills and representation, neural networks may be the best way to go, as they are specific and cannot be easily put into words, but can be learned based on examples and end-result goals. Also much of vision and audio processing will use NNs as well, though they will need to interact with the ontology/KR system to gather contextual information about the current environment. Procedural knowledge will need to be stored in a different manner, possibly using some rules programming, as well as the ontology/KR.

James

John Scanlon [EMAIL PROTECTED] wrote:
The crux of the problem is this: what should be the fundamental elements used for knowledge representation? Should they be statements in predicate or term logic, maybe with the addition of probabilities and confidence? Should they be neural-net-type learned functional mappings? Or should they be some kind of modeling system that can replicate the three-dimensional, temporal physical world (like a global weather-modeling system)? These are just some of the options, but isn't this choice the foundation for creating real understanding in AI?

Several people wrote:

James: Below should be Jef, but I will respond as well. Original quotes:

"But the computer still doesn't understand the sentence, because it doesn't know what cats, mats and the act of sitting _are_. (The best test of such understanding is not language - it's having the computer draw an animation of the action.)"

"Russell, I agree, but it might be clearer if we point out that humans don't understand the world either. We just process these symbols within a more encompassing context." - Jef

Me, James: "Understand" is probably a red-flag word, for computers and humans alike. We have no nice judge of what is understood, and I try not to use that term generally, as it devolves into vague psycho-talk, and nothing concrete. But basically, a computer can do one of two things to "show" that it has "understood" something:

1. Show its internal representation. You said cat; I know that a cat is a mammal that is blah and blah, and does blah; some cats I know are blah.
2. Act upon the information. "Bring me the cat" is followed by the robot bringing the cat to you; it obviously "understands" what you mean.

I believe with a very rich frame system of memory, that will start a fairly good understanding of what something "means" and allow some basic "understanding". At the basest level a "cat" can only mean a certain few things, maybe using the WordNet ontology for filtering that out. Then, depending on context and usage, we can possibly narrow it down, and use the frames for some basic pattern matching to narrow it down to the one. And maybe if it can't be narrowed successfully, something else should happen: either model internally both or multiple objects / processes, or get outside intervention where available. We should remember that there are almost always humans around, and they SHOULD be used, in my opinion. Either they are standing by the robot, and then they can be quizzed directly, or if it is not an immediate decision to be made, ask them via email or a phone call or something, and try to learn the information given so next time it will not have to ask.

Ex: "Bring me the cat." Confusion in the AI, seeing 4 cats in front of it. AI: "Which cat do you want?" Resolve the ambiguity through the interface.

James Ratcliff

Eric Baum wrote:

James  Jef Allbright <[EMAIL PROTECTED]> wrote:
Russell Wallace wrote: Syntactic ambiguity isn't the problem. The reason computers don't understand English is nothing to do with syntax, it's because they don't understand the world. It's easy to parse "The cat sat on the mat" into sit(cat, on mat, past). But the computer still doesn't understand the sentence, because it doesn't know what cats, mats and the act of sitting _are_. (The best test of such understanding is not language - it's having the computer draw an animation of the action.)

James  Russell, I agree, but it might be clearer if we point out that
James  humans don't understand the world either. We just process these
James  symbols within a more encompassing context.

James, I would like to know what you mean by "understand". In my view, what humans do is

Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
Matt: To parse English you have to know that pizzas have pepperoni, that demonstrators advocate violence, that cats chase mice, and so on. There is no neat, tidy algorithm that will generate all of this knowledge. You can't do any better than to just write down all of these facts. The data is not compressible.

James: You CAN actually, simply because there are patterns; anytime there are patterns, there is regularity, and the ability to compress things. And those things are limited, even if on a super-large scale. The problem with that is the irregular parts, which have to be handled, and the amount of bad data, which has to be handled. But a simple example is

ate a pepperoni pizza
ate a tuna pizza
ate a VEGAN SUPREME pizza
ate a Mexican pizza
ate a pineapple pizza

and we can see right off that these are different types of pizza topping, and we can compress that into a frame easily:

Frame Pizza:
  can have Toppings: pepperoni, tuna, pineapple
  can be Type: vegan supreme, mexican

This does take some work, and does require some good data, but can be done. We can take that further to gather probabilities and confidences about the Pizza frame, such that we can determine that a pepperoni pizza is the most likely if a random pizza is ordered. This does not give a perfect collection of information, but a lot can be garnered just from this. It does not solve the AI problem, but it does give us a nice building block of knowledge to start working with. This is a much preferred method to hand-coding each piece, as Cyc has seen; they are currently coding and using many algorithms that take advantage of statistical NLP and Google to assist and suggest answers, and to check the answers they have in place.

There is a simple pattern between nouns and verbs as well that can be extracted with relative ease, and also between adjectives and nouns, and adverbs and verbs. Ex: The dog eats, barks, growls, sniffs, attacks, alerts. That gives us an initial store of information about a dog frame. Then, if given "Rover barked at the mailmen.", we can programmatically narrow the possibilities about what Actor can fulfill the "bark" role, see that dogs bark and are most likely to bark at the mailman, and give a probability and confidence.

One problem I have with your task of text compression is the requirement that it retain exactly the same text, as opposed to exactly the same information. For a computer-science data-transmission issue the first is important, but for an AI issue the latter is more important. "The dog sniffed the shoes." and "The dog smelled the shoes." are so close in meaning as to be acceptable representations of the event, and many things can be reduced to their component parts, or even use a more common synonym or word root. It is much more important that the system be able to answer the question "What did the dog sniff/smell?" than that it keep the data exactly the same. As long as the answers come out the same, the internal representation could be in Chinese or marks in the sand.

James Ratcliff

Matt Mahoney [EMAIL PROTECTED] wrote:
James Ratcliff [EMAIL PROTECTED] wrote: Many of these examples actually aren't hard, if you use some statistical information and a common-sense knowledge base.

The problem is not that these examples are hard, but that there are millions of them. To parse English you have to know that pizzas have pepperoni, that demonstrators advocate violence, that cats chase mice, and so on. There is no neat, tidy algorithm that will generate all of this knowledge. You can't do any better than to just write down all of these facts. The data is not compressible. I said millions, but we really don't know, maybe 10^9 bits. We have a long history of underestimating the complexity of natural language, going back to SHRDLU, Eliza, and the 1959 BASEBALL program, all of which could parse simple sentences. Cycorp is the only one who actually collected this much common human knowledge in a structured form. They probably did not expect it would take 20 years of manual coding, only to discover you can't build the knowledge base first and then tack on a natural language interface later. Something is still wrong.

We have many ways to represent knowledge: LISP lists, frame-slot, augmented first-order logic, term logic, Bayesian, connectionist, NARS, Novamente, etc. Humans can easily take sentences and convert them into the internal representation of any of these systems. Yet none of these systems has solved the natural language interface problem. Why is this? You can't ignore information theory. A Turing machine can't model another machine with greater Kolmogorov complexity. The brain can't understand itself. We want to build data structures where we can see how knowledge is represented so we can test and debug our systems. Sorry, information theory doesn't allow it. You can't have your AGI and understand it too. We need to think about opaque representations, systems we can train and test without looking inside, systems that
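A minimal sketch of the frame-building step described above, in Python; the sentences are the ones from the post, but the regex pattern, the collapsing of toppings and types into one modifier slot, and the frame layout are illustrative assumptions rather than anything specified in the thread.

**
import re
from collections import Counter

# Illustrative sketch: build a tiny "Pizza" frame from "ate a X pizza"
# sentences, then read off relative frequencies as rough probabilities.
sentences = [
    "ate a pepperoni pizza",
    "ate a tuna pizza",
    "ate a VEGAN SUPREME pizza",
    "ate a Mexican pizza",
    "ate a pineapple pizza",
]

modifier_counts = Counter()
for s in sentences:
    m = re.match(r"ate a (.+) pizza", s, flags=re.IGNORECASE)
    if m:
        modifier_counts[m.group(1).lower()] += 1

total = sum(modifier_counts.values())
pizza_frame = {
    "frame": "Pizza",
    "modifiers": {mod: count / total for mod, count in modifier_counts.items()},
}
print(pizza_frame)
# With more (and real) data, the counts become usable probabilities, e.g.
# that a randomly ordered pizza is most likely pepperoni.
**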

Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel

Hi,

About

But a simple example is
ate a pepperoni pizza
 ate a tuna pizza
ate a VEGAN SUPREME pizza
ate a Mexican pizza
ate a pineapple pizza


I feel this discussion of sentence parsing and interpretation is
taking a somewhat misleading direction, by focusing on examples that
are in fact very easy to parse and semantically interpret.

When dealing with realistic sentence parsing, things don't work out so
simply as in "The cat ate the mouse."

There is even some complexity of course in the above example

"ate a Mexican pizza"

as the adjective Mexican could mean "in the country of Mexico",
"Mexican style", or "with a topping composed of pieces of a Mexican
person" ;-)

But the above examples, overall, are so simplistic they cover up the
real problems with commonsense knowledge and language understanding...

Among other complications that arise in practice (some even worse),
there are prepositions.  Let's go back to our prior example of "with"
...

Humans can unproblematically assign senses to "with" in the following sentences.

**
I ate the salad with a fork
I ate the salad with a tweezers

I ate the salad with my favorite uncle
I ate the salad with my favorite pepper

"I ate the salad with my favorite uncle," said the cannibal
"I ate the salad with my favorite pepper," said the salt.

I ate the salad with gusto
I ate the salad with Ragu
I ate the salad with Gusto

I eat steak with red wine, and fish with white wine

I eat fish with beer batter
**

Our intended approach to this problem (preposition disambiguation)
within Novamente is to teach the system groundings for many sentences
of this nature within the AGISim simulation world.  Then, when it sees
a sentence of this nature containing a concept that it hasn't seen in
the simulation world, it must match the concept to ones it has seen in
the simulation world, and make a guess.

For instance, it may have seen and learned to understand

"I ate the salad with a fork"
"I ate the salad with an olive"

in the sim world, so that when it sees

"I ate the salad with a tweezers"

it needs to realize that a tweezers is more like a fork than like an
olive (since it is not edible, and is a tool), and so the sense of
"with" in this latter sentence is probably like the sense in "I ate
the salad with a fork."

[One way for the system to realize the similarity between fork and
tweezers is to use WordNet, in which both are classified as
noun.artifact]
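For instance, a minimal sketch of that similarity check using NLTK's WordNet interface (an illustration, not Novamente code; the grounded-example table and the sense label "with_ingredient" are invented for the example, and the script assumes nltk with the wordnet corpus installed):

**
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def noun_similarity(a, b):
    # Best path similarity over all noun-synset pairs of the two words.
    best = 0.0
    for sa in wn.synsets(a, pos=wn.NOUN):
        for sb in wn.synsets(b, pos=wn.NOUN):
            sim = sa.path_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best

# Groundings already learned in the sim world, with the sense of "with" used.
grounded = {"fork": "with_tool", "olive": "with_ingredient"}

def guess_sense(new_noun):
    # Borrow the sense of the most WordNet-similar grounded example;
    # both "fork" and "tweezers" fall under noun.artifact, as noted above.
    best_match = max(grounded, key=lambda seen: noun_similarity(new_noun, seen))
    return grounded[best_match]

print(guess_sense("tweezers"))
**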

What the sim world grounding gives the AI system is a full
understanding of what the various senses of "with" actually mean.  For
instance, in "I ate the salad with a fork", what we really have is

with_tool( I ate the salad, a fork)

i.e. (in one among many possible notations)

with_tool( A, B)
Inheritance(B, fork)
A := eat(I, salad)
past(A)

and thru interacting and learning in the sim world, the system learns
various relationships related to the predicate with_tool.  Once it
guesses that "I ate the salad with the tweezers" is correctly
mapped into

with_tool( A, B)
Inheritance(B, tweezers)
A := eat(I, salad)
past(A)

then it can use its knowledge about with_tool, gained in the sim
world, to reason about the situation described.

For example, it often takes an agent practice to learn to use a tool.
A system that has had some experience with instances of with_tool in
the sim world will know this, and will have learned that the
effectiveness with which B can be used by C as a tool for A may depend
on the amount of experience that C has in using B as a tool,
particularly in using B as a tool in contexts similar to A.

Thus, the system could respond to

"I ate the salad with a tweezers"

with

"Is that difficult?"

(knowing that, since eating salad with tweezers is unusual (since e.g.
a Google search reveals few instances of it), it is likely that the
speaker may not have had much practice doing it, so it may be
difficult for the speaker.)
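A toy sketch of that last step: treat a rarely attested (event, tool) pairing as a cue to ask the follow-up question. The counts below stand in for the web-search frequencies mentioned above and are invented for the illustration.

**
# Invented frequencies standing in for e.g. Google hit counts.
pair_counts = {
    ("eat salad", "fork"): 12000,
    ("eat salad", "chopsticks"): 300,
    ("eat salad", "tweezers"): 2,
}

def respond(event, tool, rarity_threshold=100):
    # Rare pairing -> the speaker probably has little practice with this
    # tool for this task, so asking about difficulty is a sensible reply.
    count = pair_counts.get((event, tool), 0)
    return "Is that difficult?" if count < rarity_threshold else "OK."

print(respond("eat salad", "tweezers"))   # Is that difficult?
print(respond("eat salad", "fork"))       # OK.
**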

This is just one among very very many examples of probabilistic/fuzzy
commonsense knowledge about preposition senses.  In order to interpret
texts correctly an AGI system needs to have this commonsense
knowledge.  Otherwise, even if it correctly figures out that the
intended meaning is

with_tool( I eat the salad, a tweezers)

it won't be able to draw the commonsensically expected implications from this.

So, how to get all this probabilistic commonsense knowledge (which in
humans is mostly unconscious) into the AGI system?

This is where we are back to the good old alternatives, of
a-- embodied learning
b-- exhaustive education through NLP dialogue in very simple English
c-- exhaustive education through dialogue in some artificial language
like Lojban++
d-- make a big nasty database like Cyc (and try to do a better job)

My bet is that a is the best foundational approach, with some
augmentation by the other approaches.  (Though I don't plan to embark
upon d at all, I am willing to make use of DBs constructed by others.
I note that there is no standard DB of preposition senses, though we
have made one 

RE: Re: [agi] The crux of the problem

2006-11-08 Thread Kevin
 

http://www.physorg.com/news82190531.html

Rabinovich and his colleague at the Institute for Nonlinear Science at the
University of California, San Diego, Ramon Huerta, along with Valentin
Afraimovich at the Institute for the Investigation of Optical Communication
at the University of San Luis Potosi in Mexico, present a new model for
understanding decision making. Their paper, titled "Dynamics of Sequential
Decision Making," has been published in Physical Review Letters.


-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 08, 2006 11:03 AM
To: agi@v2.listbox.com
Subject: Re: Re: [agi] The crux of the problem

[snip: Ben Goertzel's message, quoted in full above]

Re: Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
My plan has both A with B, and D, examples.

And Ben: "So, I feel much of the present discussion on NLP interpretation is bypassing the hard problem, which is enabling an AGI system to learn the millions or billions of commonsense (probabilistic) rules relating to basic relationships like with_tool, which humans learn from experience."

I hoped to get more into this area with the discussion. That was a really good description below, and I agree with most all of it. It's hard to learn these on a one-by-one basis though, I'm afraid, without already having in the background that "salad is eaten with a fork, as a tool" in the DB, with confidences and probabilities. After a few passes with salad / ate / fork the frames should be primed, and then when it encounters new information it can use and reason with that. One of the easiest first-level differentiations of the roles of the prepositions is simply the ISA category of the types: person vs. tool vs. other.

It also allows for a lot of missing-information assumption, which is really interesting to me. For example, upon getting the sentence "I ate the salad" we would infer that the salad was most likely being eaten with a fork, so we could conditionally assume there is also a fork in the current environment, and things like what is in the salad (lettuce). The thing I want to make sure of there is that the confidence values and the probabilities are carried along through all these assumptions, so once it starts assuming something like an elephant in the room, it will realize that that is not probable or likely.

One better source than a straight Google search I've found is a huge set of novels, 600+, with text stats on it; it removes much of the nonsense that you find on crud web pages. Alternatively, getting it from news sources is a fairly good way as well; I am using Google News and a special engine for that.

James

Ben Goertzel [EMAIL PROTECTED] wrote:
[snip: Ben Goertzel's message, quoted in full above]
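A minimal sketch of the confidence-carrying default inference described above: defaults taken from a frame keep their probabilities, so an improbable assumption (the elephant in the room) never gets asserted with high confidence. The instrument probabilities and the threshold are invented for the illustration.

**
# P(instrument | "ate the salad") -- invented numbers for the sketch.
EAT_SALAD_INSTRUMENT = {
    "fork": 0.80,
    "spoon": 0.15,
    "tweezers": 0.04,
    "elephant": 0.01,
}

def assume_instruments(event_confidence, prior=EAT_SALAD_INSTRUMENT, threshold=0.5):
    # Carry the parse confidence through each default assumption and keep
    # only those that remain probable enough to assert.
    assumptions = []
    for instrument, p in prior.items():
        confidence = event_confidence * p
        if confidence >= threshold:
            assumptions.append((instrument + " present in environment", confidence))
    return assumptions

# "I ate the salad" parsed with confidence 0.9 -> only the fork survives.
print(assume_instruments(0.9))
**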

Re[3]: [agi] The crux of the problem

2006-11-08 Thread Mark Waser

So, how to get all this probabilistic commonsense knowledge (which in
humans is mostly unconscious) into the AGI system?
a-- embodied learning
b-- exhaustive education through NLP dialogue in very simple English
c-- exhaustive education through dialogue in some artificial language
like Lojban++
d-- make a big nasty database like Cyc (and try to do a better job)


How about e-- Learning from a large text corpus (i.e. the web)?

- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 08, 2006 11:02 AM
Subject: **SPAM** Re: Re: [agi] The crux of the problem



[snip: Ben Goertzel's message, quoted in full above]

Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eric Baum

Ben Jef wrote:
 As I see it, the present key challenge of artificial intelligence
 is to develop a fast and frugal method of finding fast and frugal
 methods,

Ben However, this in itself is not possible.  There can be a fast
Ben method of finding fast and frugal methods, or a frugal method of
Ben finding fast and frugal methods, but not a fast and frugal method
Ben of finding fast and frugal methods ... not in general ...

 in other words to develop an efficient time-bound algorithm for
 recognizing and compressing those regularities in the world
 faster than the original blind methods of natural evolution.

Ben This paragraph introduces the key restriction -- the world,
Ben i.e. the particular class of environments in which the AI is
Ben biased to operate.

As I and Jef and you appear to agree, extant Intelligence works
because it exploits structure *of our world*;
there is and can be (unless P=NP or some such radical and
unlikely possibility) no such thing as a General Intelligence
that works in all worlds.

Ben It is possible to have a fast and frugal method of finding {fast
Ben and frugal methods for operating in environments in class X} ...

Ben [However, there can be no fast and frugal method for producing
Ben such a method based solely on knowledge of the environment X ;-)
Ben ]

I am unsure what you mean by this. Maybe what you are saying is, it's not
going to be possible by writing down a simple algorithm and running it
for a week on a PC. This I agree with.

The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer years and 
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the
genome?) And of course, it has to scale, in the sense that you have to
be able to prove with < $10^7 (preferably < $10^6) that the
methodology works (as was the case more or less with the genome).
This, it seems to me, requires a first project much more limited
than understanding most of English, yet of significant practical 
benefit. I'm wondering if someone has a good proposal.


Ben One of my current sub-projects is trying to precisely formulate
Ben conditions on the environment under which it is the case that
Ben Novamente's particular combination of AI algorithms is fast and
Ben frugal at finding fast and frugal methods for solving
Ben environment-relevant problems   I believe I know how to do
Ben so, but proving my intuitions rigorously will be a bunch of work
Ben which I don't have time for at the moment ... but the task will
Ben go on my (long) queue...

Ben -- Ben



Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore

Kevin wrote:
 


http://www.physorg.com/news82190531.html

Rabinovich and his colleague at the Institute for Nonlinear Science at the
University of California, San Diego, Ramon Huerta, along with Valentin
Afraimovich at the Institute for the Investigation of Optical Communication
at the University of San Luis Potosi in Mexico, present a new model for
understanding decision making. Their paper, titled "Dynamics of Sequential
Decision Making," has been published in Physical Review Letters.


From that same article:

Rabinovich explains that a sequential approach is needed: an approach
that combines dynamic and probabilistic steps. And that, he says, is
precisely what he, Huerta and Afraimovich are proposing. "This is a new
class of model," he says. "We have found a window to consciousness, and
now we can generalize this into other cognitive functions."


Sigh!  I haven't read the article, but this looks like amateur nonsense.
 "Found a window to consciousness," eh?


Physicists sometimes get this disease that makes them think they can 
solve the problem of understanding minds, using just a handful of 
equations, and without bothering to read the psychology/philosophy/AI 
literature. Telltale sign of this?  After talking about something else 
they suddenly let slip that this is also about consciousness.


(I can be snarky about 'em because I used to be a physicist once, and 
started out thinking the same way.  Only difference is, I did some hard 
work to read the literature and got over it).



Richard Loosemore.




Re: Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Ben Goertzel

Eric wrote:

The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer years and
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the
genome?) And of course, it has to scale, in the sense that you have to
be able to prove with < $10^7 (preferably < $10^6) that the
methodology works (as was the case more or less with the genome).
This, it seems to me, requires a first project much more limited
than understanding most of English, yet of significant practical
benefit. I'm wondering if someone has a good proposal.


I am afraid that it may not be possible to find an initial project that is both

* small
* clearly a meaningfully large step along the path to AGI
* of significant practical benefit

My observation is that for nearly all practical tasks, either

a) it is a fairly large amount of work to get them done within an AGI
architecture

or

b) narrow-AI methods can do them pretty well with a much smaller
amount of work than it would take to do them within an AGI
architecture

I suspect there are fundamental reasons for this, even though current
computer science and AI theory doesn't let us articulate these reasons
clearly, at this stage.

So, I think that, in terms of proving the value of AGI research, we
will likely have to settle for a combination of:

a) an interim task that is relatively small, and is clearly along the
path to AGI, and is impressive in itself but is not necessarily of
large practical benefit unto itself.

b) interim tasks that are of practical value, and utilize AGI-related
ideas, but may also be achievable (with different strengths and
weaknesses) using narrow-AI methods

As an example of a), I suggest robustly learning to carry out a number
of Piagetian concrete-operational tasks in a simulation world.

As an example of b), I suggest natural language question answering in a
limited domain.

Alternate suggestions of tasks are solicited and much valued ... any
suggestions??  ;-)

Ben



RE: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Jef Allbright
Eric Baum wrote: 

 As I and Jef and you appear to agree, extant Intelligence 
 works because it exploits structure *of our world*; there is 
 and can be (unless P=NP or some such radical and unlikely 
 possibility) no such thing as a General Intelligence that 
 works in all worlds.

I'm going to risk being misunderstood again over a subtle point of
clarification:

I think we are in practical agreement on the point quoted above, but I
think that a more coherent view would avoid the binary distinction and
instead place general intelligence at the end of a scale where, with
diminishing exploitation of regularities in the environment,
computational requirements become increasingly intractable.

- Jef



Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel

About


 http://www.physorg.com/news82190531.html

 Rabinovich and his colleague at the Institute for Nonlinear Science at the
 University of California, San Diego, Ramon Huerta, along with Valentin
 Afraimovich at the Institute for the Investigation of Optical Communication
 at the University of San Luis Potosi in Mexico, present a new model for
 understanding decision making. Their paper, titled "Dynamics of Sequential
 Decision Making," has been published in Physical Review Letters.


I did not read the paper (as it is not available free online) but at this page

http://inls.ucsd.edu/wlc_locust.html

I found a powerpoint that illustrates the main idea, and points to a
number of related papers with abstracts online.

I think the basic idea they describe is probably sound, although not
as new as they suggest, and not deserving of the hype used to describe
it.

There has long been a question regarding the best way to apply
nonlinear dynamics concepts to the brain.  At first, naively, it was
proposed that ideas, memories, perceptions, etc. should be viewed as
**attractors** of neurodynamics -- but eventually folks realized that
the brain rarely settles into attractors in the dynamical systems
theory sense, so that (much as in the dynamical systems theory
analysis of financial time series, for example) one is generally
dealing with complex dynamics of **transients** rather than
attractors.

What Rabinovich et al are saying is, in essence, that neural systems
are characterized by complex attractors with multiple lobes, and
that decision-making often boils down (at the neurodynamic level) to
the neural net state switching between one lobe or another of the
multi-lobe attractor -- during the transient period while the
dynamics is converging to the attractor (but note that in a
real-life neurodynamic context, convergence basically never happens
because of new stimuli coming into the brain and perturbing the state
in another direction...).

So, then you can look at a probabilistic model of switching between
lobes of the attractor -- a kind of Markov chain model where the
states are attractor-lobes and (in the simplest case) there is a
matrix of transition probabilities between attractor-lobes.  And in
general this will be a Markov process with history.
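As a toy illustration of that Markov-chain picture, here is a minimal simulation; the lobe labels and transition matrix are invented, and a real history-dependent process would of course not reduce to a single fixed matrix.

**
import numpy as np

lobes = ["lobe_A", "lobe_B", "lobe_C"]
# Row i gives P(next lobe | currently in lobe i) -- made-up numbers.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

def simulate(start=0, steps=10, seed=0):
    # Sample a trajectory of lobe-to-lobe switches.
    rng = np.random.default_rng(seed)
    state, path = start, [lobes[start]]
    for _ in range(steps):
        state = rng.choice(len(lobes), p=P[state])
        path.append(lobes[state])
    return path

print(simulate())
**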

The notion of attractors with lobes is loosely related to the notion
of hyperchaos

http://www.scholarpedia.org/article/Hyperchaos

which treats attractors with multiple positive Lyapunov exponents.

I wrote about this sort of thing in 1994 in the book Chaotic Logic,
see e.g. section 2.3.2 at the end of

http://www.goertzel.org/books/logic/chapter_two.htm

Rabinovich et al are getting at a similar idea, but in extremely
simple contexts (e.g. behavior of insects and mollusks)

Is this a window on consciousness, as Rabinovich describes?  Well,
sure it is, in the sense that consciousness is related to making
decisions, and it's interesting to observe how decision-making may
resolve itself in terms of complex nonlinear neurodynamics.  But this
kind of observation (about the neurodynamic correlates of
decision-making experience) is of course only a small part of the
picture regarding consciousness, or decision-making, or intelligence,
etc.

-- Ben G



Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore


Back in 1987, during my M.Sc., I invented the term 'dynamic relaxation'
to describe a quasi-neural system whose dynamics were governed by 
multiple relaxation targets that are changing all the time.  So the idea 
of having a multi-lobe attractor, or structured, time-varying 
attractors, is not by itself a big deal (you can get that sort of 
characteristic, I would suspect, in an unlimited number of different 
ways, with both simple NNs and with the more exotic types I was 
considering); what matters is what you do with it.


What Rabinovich et al appear to do is to buy some mathematical 
tractability by applying their idea to a trivially simple neural model. 
 That means they know a lot of detail about a model that, if used for 
anything realistic (like building an intelligence), would *then* beg so 
many questions that... well, it would beg virtually *all* the 
questions about what it takes to be intelligent.  (I.e., show me a great 
supervised NN and I will show you a very big list of begged questions 
about the intelligent functions that have to be in the supervisor).


In other words, what they have done with it is something essentially 
useless.  I say that purely because of all those begged questions.



Richard Loosemore.




Ben Goertzel wrote:

[snip: Ben Goertzel's message, quoted in full above]


Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel

Richard wrote:

What Rabinovich et al appear to do is to buy some mathematical
tractability by applying their idea to a trivially simple neural model.
  That means they know a lot of detail about a model that, if used for
anything realistic (like building an intelligence) would *then* beg so
many questions that... well, it would beg virtually *all* the
questions about what it takes to be intelligent.  (I.e. Show me a great
supervised NN and I will show you a very big list of begged questions
about the intelligent functions that have to be in the supervisor).

In other words, what they have done with it is something essentially
useless.  I say that purely because of all those begged questions.


According to their papers, their detailed approach seems to be useful
for modeling some behaviors of insects and mollusks.

This seems about right, given the simplicity of their equations...

I agree that their detailed approach can probably not be scaled up to
deal with more complex brains.  For the decision a mollusk makes as to
whether to snap its shell shut or not, you can write a simple
equation, and this may indeed be interesting as neurophysiological
math modeling ... but the comparable equations for the human brain,
even if they display the same generic phenomena of hyperchaos and
multi-lobed strange attractors and so forth, would be totally
infeasible to write down in any detail ... so the approach of detailed
differential-equations-modeling is inapplicable in the cognitively
interesting cases...

Physics-wise, this is not so philosophically different from the
inability of equational approaches that succeed for the 2-body problem
to scale up to the n-body problem...

-- Ben G



[agi] An AGI baby

2006-11-08 Thread Bob Mottram
I don't know if the Novamente baby is going to be anything like a human baby. If it is, this article might be of interest: "Design methodologies for central pattern generators: an application to crawling humanoids",
http://birg2.epfl.ch/publications/fulltext/righetti06d.pdf

Also for some more cognitive baby stuff see
http://www.psyk.uu.se/hemsidor/spadbarnslabbet/index_e.html

- Bob



Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky

Eric Baum wrote:

(Why should producing a human-level AI be cheaper than decoding the
genome?)


Because the genome is encrypted even worse than natural language.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eric Baum

Eliezer Eric Baum wrote:
 (Why should producing a human-level AI be cheaper than decoding the
 genome?)

Eliezer Because the genome is encrypted even worse than natural
Eliezer language.

(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context), which didn't involve any
decryption at all.

(b) why do you think so? 



Re: Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
Ben Goertzel [EMAIL PROTECTED] wrote:
I am afraid that it may not be possible to find an initial project that is both

* small
* clearly a meaningfully large step along the path to AGI
* of significant practical benefit

I'm afraid you're right.  It is especially difficult because there is a long 
history of small (i.e., narrow AI) projects that appear superficially to be 
meaningful steps toward AGI.  Sometimes it is decades before we discover that 
they don't scale.
 
-- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
I think that natural language and the human genome have about the same order of 
magnitude complexity.

The genome is 6 x 10^9 bits (2 bits per base pair) uncompressed, but there is a 
lot of noncoding DNA and some redundancy.  By decoding, I assume you mean 
building a model and understanding the genome to the point where you could 
modify it and predict what will happen.

The complexity of natural language is probably 10^9 bits.  This is supported by:
- Turing's 1950 estimate, which he did not explain.
- Landauer's estimate of human long term memory capacity.
- The quantity of language processed by an average adult, times Shannon's 
estimate of the entropy of written English of 1 bit per character.
- Extrapolating the relationship between language model training set size and 
compression ratio in this graph: http://cs.fit.edu/~mmahoney/dissertation/
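A back-of-envelope check of the two figures, where only the 2 bits per base pair and the roughly 1 bit per character come from the text; the exposure numbers (characters per day, years) are illustrative assumptions.

**
base_pairs = 3e9                     # haploid human genome
genome_bits = 2 * base_pairs         # 2 bits per base pair -> 6e9 bits uncompressed

chars_per_day = 1e5                  # assumed language exposure
years = 30                           # assumed years of exposure
language_bits = chars_per_day * 365 * years * 1.0   # ~1 bit per character

print("genome   ~ %.1e bits" % genome_bits)     # ~6.0e+09
print("language ~ %.1e bits" % language_bits)   # ~1.1e+09
**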

I don't think the encryption of the genome is any worse.  Complex systems (that 
have high Kolmogorov complexity, are incrementally updatable, and do useful 
computation) tend to converge to the boundary between stability and chaos, 
where some perturbations decay while others grow.  A characteristic of such 
systems (as studied by Kauffman) is that the number of stable states or 
attractors tends to the square root of the size.  The number of human genes is 
about the same as the size of the human vocabulary, about 30,000.  Neither 
system is encrypted in the mathematical sense.  Encryption cannot be an 
emergent property because it is at the extreme chaotic end of the spectrum.  
Changing one bit of the key or plaintext affects every bit of the ciphertext.

The difference is that it is easier (faster and more ethical) to experiment 
with language models than the human genome.

 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 3:23:10 PM
Subject: Re: [agi] Natural versus formal AI interface languages

Eric Baum wrote:
 (Why should producing a human-level AI be cheaper than decoding the
 genome?)

Because the genome is encrypted even worse than natural language.

-- 
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky

Eric Baum wrote:

Eliezer Eric Baum wrote:


(Why should producing a human-level AI be cheaper than decoding the
genome?)


Eliezer Because the genome is encrypted even worse than natural
Eliezer language.

(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context), which didn't involve any
decryption at all.

(b) why do you think so? 


(a) Sorry, didn't pick up on that.  Possibly, more money has already 
been spent on failed AGI projects than on the human genome.


(b) Relative to an AI built by aliens, it's possible that the human 
proteome annotated by the corresponding selection pressures (= the 
decrypted genome) is easier to reverse-engineer than the causal graph 
of human language.  Human language, after all, takes place in the 
context of a complicated human mind.  But relative to humans, human 
language is certainly a lot easier for us to understand than the human 
proteome!


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] The crux of the problem

2006-11-08 Thread Matt Mahoney
James,

Many of the solutions you describe can use information gathered from statistical models, which are opaque. I need to elaborate on this, because I think opaque models will be fundamental to solving AGI. We need to build models in a way that doesn't require access to the internals. This requires a different approach than traditional knowledge representation. It will require black-box testing and performance metrics. It will be less of an engineering approach, and more of an experimental one.

Information retrieval is a good example. It is really simple. You type a question, and the system matches the words in your query to words in the document and ranks the documents by TF*IDF (term frequency times log inverse document frequency). This is an opaque model. We normally build an index, but this is really just an optimization. The language model is just the documents themselves. There is no good theory to explain why it works. It just does.

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 10:14:43 AM
Subject: Re: [agi] The crux of the problem

[snip: James Ratcliff's message, quoted in full above]
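A toy illustration of the TF*IDF ranking described above (three made-up documents; a real system would build an index, which as noted is only an optimization):

**
import math
from collections import Counter

docs = {
    "d1": "the cat sat on the mat",
    "d2": "dogs chase cats and mice",
    "d3": "pizza with pepperoni and pineapple",
}

tokenized = {name: text.split() for name, text in docs.items()}
doc_freq = Counter()
for words in tokenized.values():
    doc_freq.update(set(words))       # document frequency of each term

def score(query, words):
    # TF*IDF: term frequency times log inverse document frequency.
    tf = Counter(words)
    total = 0.0
    for term in query.split():
        if doc_freq[term]:
            total += tf[term] * math.log(len(docs) / doc_freq[term])
    return total

query = "cat mat"
ranking = sorted(tokenized, key=lambda d: score(query, tokenized[d]), reverse=True)
print(ranking)   # "d1" ranks first for this query
**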


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread John Scanlon
Fully decoding the human genome is almost impossible.  Not only is there the 
problem of protein folding, which I think even supercomputers can't fully 
solve, but the purpose for the structure of each protein depends on 
interaction with the incredibly complex molecular structures inside cells. 
Also, the genetic code for a human being is basically made of the same 
elements that the genetic code for the lowliest single-celled creature is 
made of, and yet it somehow describes the initial structure of a system of 
neural cells that then develops into a human brain through a process of 
embryological growth (which includes biological interaction from the 
mother -- why you can't just grow a human being from an embryo in a petri 
dish), and then a fairly long process of childhood development.


This is the way evolution created mind somewhat randomly over three billion 
(and a half?) years.  The human mind is the pinnacle of this evolution. 
With this mind along with collective intelligence, it shouldn't take another 
three billion years to engineer intelligence.  Evolution is slow -- human 
beings can engineer.



- Original Message - 
Eliezer S. Yudkowsky wrote:



Eric Baum wrote:

(Why should producing a human-level AI be cheaper than decoding the
genome?)


Because the genome is encrypted even worse than natural language.




