RE: [agi] WordNet and NARS

2004-02-04 Thread kevinc

Ben said:



 However, we need to remember that the knowledge in an AGI should be
 *experientially grounded*.

 . . . but it needs to turn this "knowledge" into knowledge by crosslinking
 a decent fraction of it with perceptual and procedural patterns . . .



Can a color-blind man understand "yellow"? Perhaps not in the same way a
normal person can. But he could easily know more about yellow than many:
its wavelength, its history of use in fine arts, its psychological impact,
and so on. He could even use yellow effectively in graphics, perhaps with a
tool that identifies yellow with a special texture.



So, even though the color-blind (or an AI entity) never actually "sees"
yellow, he can "experience" yellow by way of external knowledge. Perhaps the
limit to this "grounding by knowledge" phenomenon is very high. Maybe, as Ben
says, the grounding can be by "procedural patterns." WordNet-type knowledge
(implemented in a system such as NARS) could be a link to human knowledge.



A yellow filter in the sights of a target rifle makes the target-sight image
more distinct in low light. While I have never experienced it myself, the
book in which I found this information is a standard reference for
Olympic-caliber competitors. So, as a NARS-based intelligence, I give this
belief "f" and "c" values of .99 :-)



Kevin Copple 









Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
On Wed, 21 Jan 2004, Eric Baum wrote:

 New Book:

 What is Thought?
 Eric B. Baum

What a great book.



Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Philip Sutton



Thanks Bill for the Eric Baum reference.


Deep thinker that I am, I've just read the book review on Amazon and 
that has orientated me to some of the key ideas in the book (I hope!) so 
I'm happy to start speculating without having actually read the book.


(See the review below.)


It seems that Baum is arguing that biological minds are amazingly quick
at making sense of the world because, as a result of evolution, the
structure of the brain is set up with inbuilt limitations/assumptions based
on likely possibilities in the real world - thus cutting out vast areas of
speculative but ultimately fruitless computation - but presumably also limiting
biological minds' ability to understand phenomena that go beyond the
common sense that has been structurally summarised by evolved
shortcuts. (That must be why non-Newtonian physics always makes my
brain hurt!)


I'm sure that most people on the list who are heavily into developing
AGIs will have traversed this ground before. But I wondered...


(By the way, what follows is most likely not of any interest to people
well versed in this issue... what I'm doing is feeding back to the list my
understanding of this issue in the hope that somebody who knows all this
stuff can tell me if I'm on the right track... so I'm really hoping I can
learn something from both my own cogitations and from the feedback
others can offer someone still very much in the AGI sandbox.)


So here we go... On the face of it, any AGI that is not designed with all
these shortcuts and assumptions in place has a huge amount of
catching up to do to develop (or learn) efficient rules of thumb
(heuristics?). Given the flexibility of AGIs and their advantages of
computation speed and accuracy, the 4000 million years of evolutionary
learning could perhaps be recapitulated in rather less time. But how
much less? Would it only take 1 million years? 100,000 years? 100
years? I'm sure, Ben, that you won't want to be sitting around training a
baby Novamente for that long.


Perhaps AGI's need to be structured so that their minds can do two 
things:

- absorb rules of thumb from observations of other players in the world
 around them (like children picking up ways of thinking from the grown-ups
 around them), or utilise rules of thumb that are donated to them via
 data dumps.

- be prepared to, and be capable of, challenging absorbed rules of thumb,
 and be able to revert to a systematic, relatively unbiased
 exploration of an issue when rules of thumb turn up anomalous results
 or when the AGI simply feels curious to go beyond the current rules
 of thumb (a toy sketch of this follows the list).
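
A toy sketch of these two capabilities: absorbing donated rules of thumb and
then challenging them when they turn up anomalous results. All class names,
example rules and thresholds here are illustrative assumptions, not anything
from Novamente or Cyc.

# Toy sketch: absorb donated rules of thumb, track how well they predict,
# and fall back to a systematic search when a rule becomes anomalous.

class RuleOfThumbAgent:
    def __init__(self, donated_rules, anomaly_tolerance=0.3):
        # donated_rules: dict of name -> function(situation) -> predicted outcome
        self.rules = dict(donated_rules)
        self.anomaly_tolerance = anomaly_tolerance
        self.history = {name: [] for name in self.rules}

    def observe(self, name, situation, actual_outcome):
        """Record whether the named rule's prediction matched reality."""
        ok = self.rules[name](situation) == actual_outcome
        self.history[name].append(ok)

    def anomalous(self, name, window=10):
        """True if the rule has recently been wrong too often."""
        recent = self.history[name][-window:]
        return bool(recent) and sum(not ok for ok in recent) / len(recent) > self.anomaly_tolerance

    def explore(self, situations, outcomes):
        """Fallback: a systematic, relatively unbiased search for a better rule."""
        candidates = [lambda s: s > 0, lambda s: s % 2 == 0, lambda s: True]
        return max(candidates,
                   key=lambda r: sum(r(s) == o for s, o in zip(situations, outcomes)))

# Usage: reality rewards even-numbered situations, but the donated rule says
# "bigger is better"; the agent notices the anomaly and re-explores.
agent = RuleOfThumbAgent({"bigger-is-better": lambda s: s > 5})
situations = list(range(20))
outcomes = [s % 2 == 0 for s in situations]
for s, o in zip(situations, outcomes):
    agent.observe("bigger-is-better", s, o)
if agent.anomalous("bigger-is-better"):
    agent.rules["bigger-is-better"] = agent.explore(situations, outcomes)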

Maybe all the databases of common sense relationships that Cyc is
developing, and the WordNet database etc., can be considered to be huge
sets of inherited rules of thumb, i.e. they are not derived from the
experience of the AGI.


The biggest problem for an AGI starting to learn seems to me to be simply
getting to first base, whereby the AGI can make *any* sense of its
basic sensory input. It seems to me that this is the AGI's hardest task if it
doesn't have any built-in rules of thumb to orientate it.


Maybe an AGI does have to see the world through the lens of inherited
rules of thumb in its first hours and even years in order to boost its
competence at interpreting the world around it, and then it can go about
replacing inherited rules of thumb with its own grounded, self-generated
rules of thumb?


Maybe it needs to have an inbuilt program a bit like an optical character
recognition program that takes each class of incoming data and sifts it
into pre-recognised categories of data - i.e. patterns can be letters,
numbers, colours, shapes, spatial orientation (up, down, left, right,
forward, back, etc.). Once the AGI is used to dealing with these preset
categories it could be fed more ambiguous data where it has to perhaps
invent new categories of its own.
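
A minimal sketch of this kind of preset-category sifting, with a novelty
threshold that triggers inventing a new category. The category names, feature
vectors and threshold are illustrative assumptions, not anything from
Novamente or Baum.

# Toy "sift into preset categories, invent new ones when ambiguous" sketch.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class CategorySifter:
    def __init__(self, preset_prototypes, novelty_threshold=1.0):
        # preset_prototypes: dict mapping category name -> prototype feature vector
        self.prototypes = dict(preset_prototypes)
        self.novelty_threshold = novelty_threshold
        self.new_count = 0

    def sift(self, features):
        """Assign features to the nearest known category, or invent a new one."""
        name, dist = min(
            ((n, distance(features, p)) for n, p in self.prototypes.items()),
            key=lambda pair: pair[1],
        )
        if dist > self.novelty_threshold:
            # Too ambiguous for the preset categories: create a new one.
            self.new_count += 1
            name = "novel-%d" % self.new_count
            self.prototypes[name] = list(features)
        return name

# Usage: start with a couple of "inherited" categories, then feed ambiguous data.
sifter = CategorySifter({"red-ish": [1.0, 0.0], "blue-ish": [0.0, 1.0]},
                        novelty_threshold=0.5)
print(sifter.sift([0.9, 0.1]))  # -> "red-ish"
print(sifter.sift([0.5, 0.5]))  # -> "novel-1" (too ambiguous, new category invented)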


Presumably this is all very obvious, but from comments Ben has made
over a fair length of time, it seems he's very reluctant to fill an AGI's
head full of downloaded data/rules of thumb or whatever. Ben, the
language you use suggests that you'd be happy to start with none of this
downloaded stuff. But it seems to me that a new Novamente would
struggle really badly, perhaps floundering endlessly in its effort to
interpret incoming data unless it's primed to make some good guesses
and to have some preset notions of what to do with this incoming data.


It seems to me that a new-born Novamente needs to be able to use lots
of preset rules related to its first learning environment so that, of the data
coming in, a very large amount of it already makes sense at some level,
so that the AGI can apply most of its brain power to resolving a few very
simple ambiguities - like we do when solving a jigsaw puzzle. It seems
to me the key learning experience comes from successfully mastering
these very minor areas of ambiguity, thus starting to build up some
personally grounded understanding.

RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel




I agree that not all knowledge in a mind needs to be grounded.

However, I think that a mind needs to have a LOT of grounded knowledge,
in order to learn to reason usefully. It can then transfer some of the
thinking-ability (and some of the concrete relationships) learned on the
grounded domains, to help it think about its ungrounded knowledge...

-- Ben G





RE: [agi] What is Thought? Book announcement

2004-02-04 Thread Ben Goertzel




Philip,

I have mixed feelings on this issue (filling an AI mind with knowledge from
DB's).

I'd prefer to start with a tabula rasa AI and have it learn everything via
sensorimotor experience -- and only LATER experiment with feeding DB knowledge
directly into its knowledge-store.

However, for our current software-application work with our partly-done
Novamente, we are in fact loading a bunch of knowledge into the Novamente
software system and doing Novamente-based inference on it using special
narrow-AI-ish control schemata.

So it seems likely that -- once the full Novamente AGI design is
implemented and we *finally* start with experiential learning
experiments -- we will initially experiment with a Novamente that has
pre-loaded knowledge in its brain, and draws on this knowledge as appropriate
in the course of its experiential learning. If the pre-loaded knowledge winds
up not helping, due to its ungrounded nature, then we'll revert to the tabula
rasa approach.

-- Ben G




  Presumably this is all very obvious, but from comments Ben has made over a
  fair length of time, it seems he's very reluctant to fill an AGI's head full
  of downloaded data/rules of thumb or whatever. Ben, the language you use
  suggests that you'd be happy to start with none of this downloaded stuff.
  But it seems to me that a new Novamente would struggle really badly, perhaps
  floundering endlessly in its effort to interpret incoming data unless it's
  primed to make some good guesses and to have some preset notions of what to
  do with this incoming data.
  
  It seems to me that a new-born Novamente needs to be able to use lots of
  preset rules related to its first learning environment so that, of the data
  coming in, a very large amount of it already makes sense at some level, so
  that the AGI can apply most of its brain power to resolving a few very
  simple ambiguities - like we do when solving a jigsaw puzzle. It seems to me
  the key learning experience comes from successfully mastering these very
  minor areas of ambiguity, thus starting to build up some personally grounded
  understanding - which can be added to (exponentially?) as the AGI retests
  the validity of its understanding based on inherited rules of thumb and as
  it builds a more and more complex picture of what's around it - at each
  level gaining mastery through resolving minor ambiguities at the new level
  of understanding.
  
  If this model is right then perhaps it shouldn't matter if the AGI has been
  given a humungous pile of downloaded data/rules of thumb? It would just call
  on data in the databanks as these seem to have some useful connection to the
  data/rules of thumb that the AGI has mastered. Initially the AGI would
  understand so little that virtually all of the data in its storage would be
  just so much noise. It would only be able to work its way into the data as
  it mastered some initial concepts and concept labels. So in that sense an
  infant AGI wouldn't be burdened with having too much downloaded ungrounded
  data - because most of that data would be effectively invisible to it.
  Isn't this pretty much like a child that has grown up in a house with a huge
  library, the contents of which only make sense very slowly as the child
  builds level after level and area after area of base knowledge?
  
  Anyway enough 
  for now. If anyone has time for a babe in the sand box I'd love to know 
  what you think of these musings!
  
  Cheers, 
  Philip
  
  ---

  (source)
  What Is Thought?
  by Eric B. Baum (Author)
  Publisher: MIT Press (January 1, 2004)
  ISBN: 0262025485
  
  Review: In What Is Thought? Eric Baum proposes a computational explanation
  of thought. Just as Erwin Schrödinger in his classic 1944 work What Is Life?
  argued ten years before the discovery of DNA that life must be explainable
  at a fundamental level by physics and chemistry, Baum contends that the
  present-day inability of computer science to explain thought and meaning is
  no reason to doubt there can be such an explanation. Baum argues that the
  complexity of mind is the outcome of evolution, which has built thought
  processes that act unlike the standard algorithms of computer science and
  that to understand the mind we need to understand these thought processes
  and the evolutionary process that produced them in computational terms.
  Baum proposes that underlying mind is a complex but compact program that
  corresponds to the underlying structure of the world. He argues further that
  the mind is essentially programmed by DNA. We learn more rapidly than
  computer scientists have so far been able to explain because the DNA code
  has programmed the mind to deal only with meaningful possibilities. Thus the
  mind understands by exploiting semantics, or meaning, for the purposes of
  computation; constraints are built in so that

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
 It seems that Baum is arguing that biological minds are amazingly quick
 at making sense of the world because, as a result of evolution, the
 structure of the brain is set up with inbuilt limitations/assumptions
 based on likely possibilities in the real world - thus cutting out vast
 areas for speculative but ultimately fruitless computation - but
 presumably limiting biological minds' ability to understand phenomena
 that go beyond common sense that has been structurally summarised by
 evolved shortcuts.
 . . .
 Maybe all the databases of common sense relationships that Cyc is
 developing and the Wordnet database etc. can be considered to be huge
 sets of inherited rules of thumb ie.
 . . .

Before this discussion gets too far off track, Baum's book
talks about inductive biases that are built into human and
animal learning by evolution, not the sort of specific
knowledge that is coded in Cyc. In fact, Baum discusses
Cyc and is pessimistic about its prospects. As examples of
inductive biases Baum discusses 3-D structure, causality,
language, and cheating detection, with many specific
examples from human and animal behavior. He doesn't say
we are born knowing about these things, but that our brains
are primed for learning about them. He sees reinforcement
learning as fundamental to brains, and discusses his Hayek
experiments. He also sees evolution, culture and individual
brains as three interacting levels of learning.

Cheers,
Bill



RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel

Philip,

I think it's important for a mind to master SOME domain (preferably more
than one), because advanced and highly effective cognitive schemata are only
going to be learned in domains that have been mastered.  These cognitive
schemata can then be applied in other domains as well, which are understood
only to a lesser degree of mastery.

And, as you say, in order for the AI to master some domain, it needs a lot
of grounded knowledge in that domain.

So, I am skeptical that an AI can really think effectively in ANY domain
unless it has done a lot of learning based on grounded knowledge in SOME
domain first; because I think advanced cognitive schemata will evolve only
through learning based on grounded knowledge...

-- Ben

 So the way you describe things seems to fit the domain where an AGI
 is trying to build mastery but I'm not convinced that the AGI absolutely
 needs a high level of grounded knowledge in areas where it is not
 building mastery.

 But in areas where the AGI is not building, or better still has not
 achieved, mastery, it should exercise humility and caution and not make
 any rash decisions that could affect others - because it really doesn't
 know how sensible its inherited knowledge is.  This seems to me to be
 an area where ethics intersects with mind development and the use of
 mind.

 Cheers, Philip



[agi] Simulation and cognition

2004-02-04 Thread Ben Goertzel

Philip,

You and I have chatted a bit about the role of simulation in cognition, in
the past.  I recently had a dialogue on this topic with a colleague (Debbie
Duong), which I think was somewhat clarifying.  Attached is a message I
recently sent to her on the topic.

-- ben



Debbie,

Let's say that a mind observes a bunch of patterns in a system S: P1,
P2,...,Pn.

Then, suppose the mind wants to predict the degree to which a new pattern,
P(n+1), will occur in the system S.

There are at least two approaches it can take:

1) reverse engineer a simulation S' of the system, with the property that
if the simulation S' runs, it will display patterns P1, P2, ..., Pn.  There
are many possible simulations S' that will display these patterns, so you
pick the simplest one you can find in a reasonable amount of effort.

2) Do probabilistic reasoning based on background knowledge, to derive the
probability that P(n+1) will occur, conditional on the occurrence of
P1,...,Pn.

My contention is that process 2 (inference) is the default one, with process
1 (simulation) followed only in cases where

a) fully understanding the system S is very important to the mind, so that
it's worth spending the large amount of effort required to build a
simulation of it [inference being computationally much cheaper]

b) the system S is very similar to systems that have previously been
modeled, so that building a simulation model of S can quickly be done by
analogy (a toy sketch contrasting the two processes follows below)
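
As a toy illustration of the contrast: estimating P(n+1) by conditional
probability over background experience (process 2), versus searching for the
simplest generative program that reproduces P1..Pn and then running it
(process 1). The pattern encoding, the background corpus and the candidate
generators below are all illustrative assumptions, not Novamente's actual
machinery.

# Toy contrast between the two prediction processes described above.
# Patterns are predicates over a short symbol sequence produced by the system S.
from itertools import product

observed = "ababab"
P = [
    lambda s: s.startswith("ab"),            # P1
    lambda s: s.count("a") == s.count("b"),  # P2
    lambda s: "aa" not in s,                 # P3
]
P_next = lambda s: s.endswith("ab")          # P(n+1): will this hold too?

# Process 2 (inference): estimate P(P_next | P1..Pn) from background experience,
# here a small corpus of previously seen sequences.
background = ["ababab", "abab", "aabb", "bababa", "ababba", "abba"]
matching = [s for s in background if all(p(s) for p in P)]
prob = sum(P_next(s) for s in matching) / len(matching) if matching else 0.5
print("inference estimate:", prob)  # sparse data, so the estimate stays uncertain

# Process 1 (simulation): search a small space of candidate generators for the
# simplest program S' whose output displays P1..Pn, then check P(n+1) on it.
def repeat_to_length(seed, n):
    return (seed * (n // len(seed) + 1))[:n]

def candidate_seeds(max_len=3):
    # shorter seeds are tried first, so the first hit is the simplest
    for length in range(1, max_len + 1):
        for seed in product("ab", repeat=length):
            yield "".join(seed)

simulation = None
for seed in candidate_seeds():
    output = repeat_to_length(seed, len(observed))
    if all(p(output) for p in P):
        simulation = output
        break

print("simplest simulation output:", simulation)          # "ababab"
print("simulation predicts P(n+1):", P_next(simulation))  # True

The simulation answer costs a search over candidate programs, which is the
computational-expense trade-off described above; the inference estimate is a
single pass over background knowledge.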

About the simulation process.  Debbie, you call this process "simulation";
in the Novamente design it's called "predicate-driven schema learning", the
simulation S' being a SchemaNode and the conjunction P1 & P2 & ... & Pn
being a PredicateNode.

We plan to do this simulation-learning using two methods:

* combinator-BOA, where both the predicate and schema are represented as
CombinatorTrees.

* analogical inference, modifying existing simulation models to deal with
new contexts, as in case b) above

If we have a disagreement, perhaps it is just about the relative frequency
of processes 1 and 2 in the mind.  You seem to think 1 is more frequent
whereas I seem to think 2 is much more frequent.  I think we both agree that
both processes exist.

I think that our reasoning about other people's actions is generally a mix
of 1 and 2.  We are much better at simulating other humans than we are at
simulating nearly anything else, because we essentially re-use the wiring
used to control *ourselves*, in order to simulate others.

This re-use of self-wiring for simulation-of-others, as Eliezer Yudkowsky
has pointed out, may be largely responsible for the feeling of empathy we
get sometimes (i.e., if you're using your self-wiring to simulate someone
else, and you simulate someone else's emotions, then due to the use of your
self-wiring you're gonna end up feeling their (simulated) emotions to some
extent... presto! empathy...).






RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben,

 So, I am skeptical that an AI can really think effectively in ANY
 domain unless it has done a lot of learning based on grounded
 knowledge in SOME domain first; because I think advanced cognitive
 schemata will evolve only through learning based on grounded
 knowledge... 

OK. I think we're getting close to agreement on most of this except 
what could be the key starting point.

My intuition is that, if an AGI is to avoid (an admittedly accelerated) 
recapitulation of the 3,500-million-year evolution of functioning minds, it 
will have to start thinking *first* in one domain using inherited rules of 
thumb for interpreting data (and it might help to download some initial 
ungrounded data that otherwise would have had to be accumulated 
through exposure to its surroundings).  Once the infant AGI has some 
competence using these implanted rules of thumb it can then go 
through the job of building its own grounded rules of thumb for data 
interpretation and substituting them for the rules of thumb provided at 
the outset by its creators/trainers.

So my guess is that the fastest (and still effective) path to learning 
would be:
-   *first* a partially grounded experience 
-   *then* a fully grounded mastery 
-   then a mixed learning strategy of grounded and non-grounded as need
    and opportunity dictates

Cheers, Philip



RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel

 So my guess is that the fastest (and still effective) path to learning
 would be:
 -   *first* a partially grounded experience
 -   *then* a fully grounded mastery
 -   then a mixed learning strategy of grounded and non-grounded as need
 and opportunity dictates

 Cheers, Philip

Well, this appears to be the order we're going to do for the Novamente
project -- in spite of my feeling that this isn't ideal -- simply due to the
way the project is developing via commercial applications of the
half-completed system.  And, it seems likely that the initial partially
grounded experience will largely be in the domain of molecular biology... at
least, that's a lot of what our Novamente code is thinking about these
days...

-- Ben G



Re: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben,

What you said to Debbie Duong sounds intuitively right to me.  I think 
that most human intuition would be inferential rather than a simulation, 
but it seems that higher primates store a huge amount of data on the 
members of their clan - so my guess is that we do a lot of simulating of 
the in-group.  Maybe your comment about empathy throws interesting 
light on this.  If we simulate our in-group but use crude inferential 
intuition for most of the outgroup (except favourite enemies that we 
fixate on!!) then maybe that explains why we have so little empathy for 
the outgroup (and can so easily treat them abominably).

Given that simulation is much more computationally intensive, it gives 
us a really strong reason for emphasising this capacity in AGIs, 
precisely because they may be able to escape our limitations in this 
area to a great extent.  AGIs with strong simulation capacity could 
therefore be very valuable partners (complementors) for humans.

The empathy issue is interesting in the ethical context.  We can feel 
empathy because we can simulate the emotions of others.  Maybe the 
AllSeeing AI needs to make an effort to simulate not only the thinking of 
other beings but also their emotions as well.  I guess you'd have to do 
that anyway, since emotions affect thinking so strongly in many (most?) 
beings.

Cheers, Philip






RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben,  

 Well, this appears to be the order we're going to do for the Novamente
 project -- in spite of my feeling that this isn't ideal -- simply due
 to the way the project is developing via commercial applications of the
 half-completed system.  And, it seems likely that the initial
 partially grounded experience will largely be in the domain of
 molecular biology... at least, that's a lot of what our Novamente code
 is thinking about these days... 

The order might be the same but I don't think the initial content will be 
right - unless you intend that a conscious Novababy is born into a 
molecular biology world/sandbox!

What were you imagining the Novababy's first simulated or real world would 
be?  A world with a blue square and a sim-self with certain senses and 
actuators?  Or whatever.  Then that is the world I think you'll need to 
help the Novababy understand, by giving it ready-made rules of thumb 
for interpreting the data generated in that precise world.  I'd be inclined 
to move on to a molecular biology world a little later in Novababy's life!  
:)

Anyway - you can test my conjectures very easily with a bit of 
experimentation.

Cheers, Philip



RE: [agi] Simulation and cognition

2004-02-04 Thread Ben Goertzel

 What you said to Debbie Duong sounds intuitively right to me.  I think
 that most human intuition would be inferential rather than a simulation,
 but it seems that higher primates store a huge amount of data on the
 members of their clan - so my guess is that we do a lot of simulating of
 the in-group.  Maybe your comment about empathy throws interesting
 light on this.  If we simulate our in-group but use crude inferential
 intuition for most of the outgroup (except favourite enemies that we
 fixate on!!) then maybe that explains why we have so little empathy for
 the outgroup (and can so easily treat them abominably).

Good point.

And, simulating the in-group is easier for two reasons:

1) in-group members are similar to us, so we can use our self-models as
initial guesses for modeling other in-group members ... whereas if we want
to model out-group members, we need to do more learning from scratch

2) in-group is often smaller than the out-group: modeling a smaller range of
individuals requires less computational effort

Again I come to the conclusion that the root of all evil is not money, but
rather limitations on compute power...

ben



RE: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben,

Maybe we do simulate a *bit* more with outgroups than I first thought - 
but we do it using caricature stereotypes based on *ungrounded* data - 
i.e. we refuse to use grounded data (from our ingroup), perhaps, since 
that would make these outgroup people uncomfortably too much like 
us.

Cheers, Philip



RE: [agi] WordNet and NARS

2004-02-04 Thread Yan King Yin
From: Ben Goertzel [EMAIL PROTECTED]

Well, this appears to be the order we're going to do for the Novamente
project -- in spite of my feeling that this isn't ideal -- simply due to the
way the project is developing via commercial applications of the
half-completed system.  And, it seems likely that the initial partially
grounded experience will largely be in the domain of molecular biology... at
least, that's a lot of what our Novamente code is thinking about these
days...

Hi Ben

I'm very interested in applying automation to experimental molecular
biology, especially neurobiology. I think it will help neuroscience
a lot if complex experiments can be done automatically by AIs, but
I'm not sure about letting an AGI reason about molecular biology in
an abstract way. Which are you planning on? I'm also curious why you
picked this area.

YKY





RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel

  Well, this appears to be the order we're going to do for the Novamente
  project -- in spite of my feeling that this isn't ideal -- simply due
  to the way the project is developing via commercial applications of the
  half-completed system.  And, it seems likely that the initial
  partially grounded experience will largely be in the domain of
  molecular biology... at least, that's a lot of what our Novamente code
  is thinking about these days...

 The order might be the same but I don't think the initial content will be
 right - unless you intend that a conscious Novababy is born into a
 molecular biology world/sandbox!

That may well be the case... a robotized bio lab as an AGI playroom... we'll
see!

 What were you imagining the Novababy's first simulated or real world would
 be?  A world with a blue square and a sim-self with certain senses and
 actuators?  Or whatever.  Then that is the world I think you'll need to
 help the Novababy understand by giving it ready-made rules of thumb
 for interpreting the data generated in that precise world.

I think that in an environment in which the system has decent sensors and
actuators, no pre-specified rules of thumb will be needed (though some
perceptual preprocessing routines will be needed, just as the human visual
and acoustic cortex supply ...).  Pre-specified rules are useful for domains
where the system's ability to perceive and act is more limited.

 I'd
 be inclined
 to move on to a molecular biology world a little later in
 Novababy's life!

Well, we're already applying the incomplete AI system to molecular biology
in more limited, narrow-AI-ish ways; that was my point...

ben
