Re: [agi] Symbols in search of meaning

2003-02-27 Thread RSbriggs
In a message dated 2/26/2003 9:47:58 PM Mountain Standard Time, [EMAIL PROTECTED] writes:

Human children will learn that certain sound patterns are associated 
with patterned human behaviour. So very soon (plus or minus one 
year) children will start to accumulate awareness of words that they 
know are important because big people around them use those words - 
but the child has to expend mental effort to discover the meaning of the 
words. So, once this meta-behaviour is established, it is possible to 
download symbols into the young NGI and the youngster then begins 
the laborious task of attaching meaning to the words (derived from both 
experiential and taught learning).

Agree. This is at least part of the reason that (shameless self-plug) my Rogue-AI project started out from a mostly cognitive linguistics / semiotics base. It could probably win the Loebner prize... not interested in entering... I hope to start teaching her to understand her source code by the end of the year.


Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
 
 Yep.  Novamente contains particular mechanisms for converting between
 declarative and procedural knowledge... something that is learned
 procedurally can become declarative and vice versa.  In fact, if all goes
 according to plan (a big if of course ;) Novamente should *eventually* be
 much better at this than the human brain.

I'm glad that you choose to incorporate elements of human cognitive theory into 
Novamente, even if you are not intent on building a brain.  Such commonalities will 
make the design of NM far more intuitive and accessible to designer and lay-person 
alike.

 
 For instance, humans are not very good at making procedural knowledge
 declarative -- it takes a rare human to be able to explain and understand
 how they do something they know how to do well.  There is a real algorithmic
 difficulty here, but even so, I think a lot of the difficulty that humans
 have in doing this is unnecessary, i.e. a consequence of the particular
 way the brain is structured rather than a consequence of the (admittedly
 large) difficulty of the problem involved.
 


I disagree that we have a problem converting procedural to declarative knowledge in all 
domains.  As an example, I can retrieve a phone number from procedural memory with one 
retrieval operation (watch my fingers dial it).  Admittedly this system isn't as slick 
as one that would work purely internally, since it requires performing the task, but it 
works. 

Grammar is tougher: I can test any given rule by using it in a sentence and seeing 
how it sounds.  But extrapolating all of the rules I use is a tricky problem; in fact, 
it's one we haven't completely finished solving (the rules of English grammar are 
similar, but not identical, to the rules our brains want to use).   

And then communicating how to swing a golf club is another matter, but I think the 
limitation there is one of communication: our brains have no good way of 
transmitting or interpreting such fine-grained information.  

And to be fair to our brains, transcribing a motor memory of how to move 10,000 
muscles in a very precise sequence into declarative knowledge is an extremely 
challenging problem.  Particularly because that sequence isn't static, but requires 
feedback from joint sensors.  The information isn't just the sequence of neural 
impulses, it's the substance of the entire network.


That said, Novamente would be far better at it than we are.  With the ability to 
understand its own code, NM could just rattle off the relevant parameters into 
declarative memory.  Making this declarative knowledge useful would require 
understanding how it functions, though.   That would be the tricky part.   


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Ben Goertzel

Hi,

 I disagree that we have a problem converting procedural to
 declarative for all domains.

Sure, you're right.  Here as in many other areas, the human brain's
performance is highly domain-variant.

 That said, Novamente would be far better at it than we are.  With the
 ability to understand its own code, NM could just rattle off the
 relevant parameters into declarative memory.  Making this
 declarative knowledge useful would require understanding how it
 functions, though.   That would be the tricky part.

Right.  This doesn't require source-code-analysis either, just an
understanding of learned parameters existing within the run-time state of
the system.

Indeed, making the declarative knowledge derived from rattling off
parameters describing procedures useful is a HARD problem... but at least
Novamente can get the data, which, as you have agreed, would seem to give AI
systems an in-principle advantage over humans in this area...

It's hard to overestimate the intelligence-enhancement potential of a more
fluid process of interconversion between declarative and procedural knowledge
...

Ben G



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble

 
 Indeed, making the declarative knowledge derived from rattling off
 parameters describing procedures useful is a HARD problem... but at least
 Novamente can get the data, which, as you have agreed, would seem to give AI
 systems an in-principle advantage over humans in this area...
 
 It's hard to overestimate the intelligence-enhancement potential of a more
 fluid process of interconversion between declarative and procedural knowledge


Yes, getting this data is what the entire field of neurophys is about.  Being able to 
extract it without using surgery, electrodes, amplifiers, and gajillions of man-hours 
would be outstanding.   A lack of data is the primary thing holding neuroscience back, 
and to a large degree the depth of cognitive theory over time mirrors the quality of 
the acquisition and analysis tools. 

-Brad



RE: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Ben Goertzel
 Yes, getting this data is what the entire field of neurophys is
 about.  Being able to extract it without using surgery,
 electrodes, amplifiers, and gajillions of manhours would be
 outstanding.   A lack of data is the primary thing holding
 neuroscience back and to a large degree, the depth of cognitive
 theory over time mirrors the quality of the acquisition and
 analysis tools.

 -Brad


That was exactly my impression when I last looked seriously into
neuroscience (1995-96).  I wanted to understand cognitive dynamics, and I
hoped that tech like PET and fMRI would do the trick.  But nothing in existence
gave the combination of temporal and spatial acuity that you'd need to
even make a start on the problem.  I had a PhD student (Graham Zemunik)
who tried to make a detailed model of the cognitive dynamics in a
cockroach's brain -- and even that was pretty dicey because the data found
by different researchers was often inconsistent.  From what you're
describing, some headway is finally being made on modeling cognitive
dynamics in parts of the rat's brain, and that's a great thing.  I've
enjoyed following Walter Freeman's work on olfaction in rabbits, but I've
also noticed the pattern of bold hypotheses and partial retractions in his
work over time, which is due to the fact that the data is not quite rich
enough to support the kind of theorizing he wants to do.

Fortunately, neuro-analysis technologies are advancing really fast just like
computer chips.  In another 10-30 years we will have the data to understand
our brains, and the computers and algorithms to crunch this data.  (And we
may have AI's to do the work for us, who knows ;)

-- Ben G



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
 
 That was exactly my impression when I last looked seriously into
 neuroscience (1995-96).  I wanted to understand cognitive dynamics, and I
 hoped that tech like PET and fMRI would do the trick.  But nothing in existence
 gave the combination of temporal and spatial acuity that you'd need to
 even make a start on the problem.  I had a PhD student (Graham Zemunik)

Just FYI, MEG (magnetoencephalography) is a good step toward temporal 
precision, but is still a long way from discerning individual neurons.  It can 
basically give us EEG-like measurements from deep inside the brain without using 
electrodes (which obviously opens a lot of doors for human experimentation). 

 who tried to make a detailed model of the cognitive dynamics in a
 cockroach's brain -- and even that was pretty dicey because the data found
 by different researchers was often inconsistent.  From what you're

I'm sure you know this, but for the benefit of others:
insect brains are much easier to study because the neurons are explicitly laid down by 
the genetic code.  They are identifiable neuron by neuron and are roughly identical 
from insect to insect (within the same species).   The fact that even these networks 
aren't yet fully understood is a shining example of how far we have to go in 
understanding the human brain.

 describing, some headway is finally being made on modeling cognitive
 dynamics in parts of the rat's brain, and that's a great thing.  I've
 enjoyed following Walter Freeman's work on olfaction in rabbits, but, I've
 also noticed the pattern of bold hypotheses and partial retractions in his
 work over time, which is due to the fact that the data is not quite rich
 enough to support the kind of theorizing he wants to do.
 

I support fringe theorists like Freeman as long as they stay in touch with the 
community and don't sail off to parts unknown (Edelman tends to do this).  Progress 
takes all types: the careful, methodical data collectors, and the people on the front 
lines pushing theories to extents that the data barely supports.  


 Fortunately, neuro-analysis technologies are advancing really fast just like
 computer chips.  In another 10-30 years we will have the data to understand
 our brains, and the computers and algorithms to crunch this data.  (And we
 may have AI's to do the work for us, who knows ;)
 

Here's hoping.  Although I fear they probably said similar things 10-30 years ago.  
Only nanotech can get us the type of noninvasive, detailed data that we need.  The 
type of electrodes we currently use are never going to suffice. 

Lucky for us that the brain uses electrically recordable signals from a structure that 
is so easily accessible.   We'd be in dire straits if the brain used entirely chemical 
mechanisms and was located in an abdominal sac.   Thank you, evolution, for making our 
jobs as easy as they are :)



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble

 
 I actually have a big MEG datafile on my hard drive, which I haven't gotten
 around to playing with.
 
 It consists of about 120 time series, each with about 100,000 points in it.
 
 It represents the magnetic field of someone's brain, measured through 120
 sensors on their skull, while they sit in a chair and perform an experiment
 of clicking a button when they see a line appear on a screen.  (Pretty
 exciting, huh?)  I got the data from my friend Barak Pearlmutter at UNM, who
 has spent a few years working on signal processing tools (using blind
 source signal separation methods) designed to clean up the raw data
 (basically subtracting off for noise caused by repeated reflection of
 magnetic fields off the inside of the skull).

It's actually a very complicated data analysis.  You basically have a spherical 
surface of data (the sensors), and you are trying to reconstruct the sources and sinks 
in 3D that created the 2D data you are observing.  The problem is underconstrained, 
because many 3D data sets could produce the same 2D data set, but you try to build in 
some anatomical assumptions (e.g., we know the hippocampus is probably a powerful 
source/sink, so pin that thumbtack there) to constrain the possible results.
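
As a toy illustration of the kind of reconstruction described above (not the
actual analysis pipeline), here is a minimum-norm estimate in Python.  The
sensor count matches the 120-channel rig mentioned in the thread, but the
lead field, source grid, noise level, and the "hippocampus" weighting are all
invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_sources = 120, 500   # hypothetical sizes; real source grids are larger
# "Lead field": how each candidate 3D source projects onto the 2D sensor surface.
L = rng.standard_normal((n_sensors, n_sources))

# True activity: a few active sources (the 3D pattern we want to recover).
x_true = np.zeros(n_sources)
x_true[[10, 200, 333]] = [5.0, -3.0, 4.0]
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)  # sensor readings + noise

# Minimum-norm estimate: among the infinitely many x with L @ x ~ y
# (the underconstrained part), pick the smallest-norm one via
# Tikhonov-regularized least squares.
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# Anatomical assumptions can enter as per-source weights: up-weight regions
# known to be strong sources (the "pin that thumbtack there" idea).
w = np.ones(n_sources)
w[200] = 10.0                      # pretend source 200 is "the hippocampus"
W = np.diag(w ** 2)
x_prior = W @ L.T @ np.linalg.solve(L @ W @ L.T + lam * np.eye(n_sensors), y)
```

The weighted solve shows why priors matter: both estimates fit the sensor
data equally well, but the prior steers the ambiguity toward anatomically
plausible source configurations.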

As you can imagine, it's very weak spatially, but far more precise temporally than PET 
or fMRI, which can only measure blood-flow changes occurring 1 second or more after the 
source activity.

I think combined MEG/fMRI (or was it PET/fMRI?) is going to be able to get the best of 
both worlds.  Either way, there are plenty of technological obstacles.   

  
 
 I guess that MEG can be used, over time, for stuff subtler than clicking
 buttons when lines appear.  But using it to track the dynamics of thoughts
 seems a long way off  Basically, one needs a lot more than 120 sensors!!
 ... and then one needs to hope the signal processing code scales well (it
 probably can be made to do so)
 

Well, you can use far more complex behavioral tasks than that even with existing MEG 
technology (have people navigate a maze, do math, solve word problems, etc.).  But in 
order to get a footing with the new MEG technology, they need to start with the basics, 
so that they can map MEG responses onto known EEG signatures available from work that's 
already been done.  The first decade of any new neurophys technique is characterized 
by a whirlwind of very basic, boring results (usually ones that create pretty pictures 
for generating funding).  Only after the tech has matured do you even begin to hit the 
cool stuff.  


I'll bet AIs will be required to analyze the data sets we'll be getting in the next 20 
years. 

-brad



RE: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Ben Goertzel


  We need one of the technologies to evolve to the point where it delivers
  decent spatial AND temporal resolution...


 That's exactly what I meant actually: combined FMRI and MEG
 within the same experiment.  You get data from each
 simultaneously and combine them afterwards, using the spatially
 precise FMRI data to pin down the temporally precise MEG data.
 It's hard to squeeze a MEG rig into an FMRI machine at the
 moment, particularly without using ferrous metals (ouch).  But
 I'm sure they'll figure it out in the near future.

Fascinating!  I didn't know that was possible...  But, it's all magnetism, I
guess -- the rest is details ;-)

ben g



RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-02-27 Thread Philip Sutton



Ben,

 One question is whether it's enough to create general pattern-recognition
 functionality, and let it deal with seeking meaning for symbols as a
 subcase of its general behavior.  Or does one need to create special
 heuristics/algorithms/structures just for guiding this particular process? 

Bit of both, I think.  It's a bit like there's a search for 'meaning' and a search 
for 'Meaning'.


I think all AGIs need to search for meaning behind patterns to be able to 
work out useful cause/effect webs.  And when AGIs work with symbols, this 
general 'seeking the meaning of patterns' process can be applied as the first 
level of contemplation.


But in the ethical context I think we are after 'Meaning', where this relates 
to some notion of the importance of the pattern or symbol for some 
significant entity: for the AGI, the AGI's mentors, other sentient beings and 
other life.


At the moment you have truth and attention values attached to nodes and 
links.  I'm wondering whether you need to have a third numerical value type 
relating to 'importance'.  Attention has a temporal implication: it's intended 
to focus significant mental resources on a key issue in the here and now. 
And truth values indicate the reliability of the data.  Neither of these 
concepts captures the notion of importance.


I guess the next question is: what would an AGI do with data on importance? 
I'm just thinking off the top of my head, but my guess is that if the nodes 
and links had high importance values but low truth values, this should 
set up an 'itch' in the system, driving the AGI to engage in learning and 
contemplation that would lift the truth values.  Maybe the higher the 
dissonance between the importance values and the truth values, the more 
this would stimulate high attention values for the related nodes and links.
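
The 'itch' idea above can be made concrete with a toy sketch.  Everything
here is hypothetical: the node fields and the update rule are invented for
illustration, not drawn from Novamente's actual design:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Hypothetical node (or link) carrying the three proposed values."""
    name: str
    truth: float       # reliability of the data, in [0, 1]
    attention: float   # here-and-now mental-resource allocation, in [0, 1]
    importance: float  # the proposed third value: significance to the AGI

def itch(node: Node) -> float:
    """Dissonance between importance and truth: something the system
    believes matters but does not yet understand.  One simple proposal
    is the positive gap between the two values."""
    return max(0.0, node.importance - node.truth)

def update_attention(node: Node, gain: float = 0.5) -> None:
    """Let the itch stimulate attention (capped at 1.0), so the AGI is
    driven toward learning that would lift the truth value."""
    node.attention = min(1.0, node.attention + gain * itch(node))

# A high-importance, low-truth concept generates a strong itch:
ethics = Node("B31-58-DFT", truth=0.1, attention=0.2, importance=0.9)
update_attention(ethics)
```

As truth values rise through learning, the itch shrinks and attention is
freed for other nodes, which is the self-limiting behavior one would want.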


Then there's the question of what would generate the importance values.  I 
think these values would ultimately be derived from the perceived 
importance values conveyed by 'significant others' for the AGI, and by the 
AGI's own ethical goal structure.


 I don't think that preloading symbols and behavior models for something as
 complex as *ethical issues* is really going to be possible.  I think
 ethical issues and associated behavior models are full of nuances that
 really need to be learned. 

Of course ethical issues and associated behavior models are full of 
nuances that really need to be learned to make much deep sense.  Even 
NGIs like us, with presumably loads of hardwired predisposition to ethical 
behaviour, can spend our whole lives in ethical learning and contemplation! 
:) 


So I guess the issues are (a) whether it's worth preloading ethical concepts, 
and (b) whether it's possible to do it.


I'll start with (b) first and then consider (a) (since lots of people have a 
pragmatic tendency not to bother about issues till the means for acting on 
them are available).


(Please bear in mind that I'm not experienced or expert in any of the 
domains I'm riding roughshod over... everything I say will be intuitive 
generalist ideas...)


Let's take the hardest case first: the most arcane abstract concept that you 
can think of, or the one that has the most intricate and complex 
implications/shades of meaning for living.


Let's label the concept B31-58-DFT.  We create a register in the AGI 
machinery to store important ethical concepts.  We load in the label 
B31-58-DFT and give it a high importance value.  We also load sets of words 
in quite a few major languages into two other registers: one set of words is 
considered to have meaning very close to the concept that we have 
prelabelled as B31-58-DFT; the other set contains words that are not the 
descriptive *meaning* of the B31-58-DFT concept but are often associated 
with it.  We then set the truth value of B31-58-DFT to, say, zero.  We also 
create a GoalNode associated with B31-58-DFT that indicates whether the 
AGI should link B31-58-DFT to its positive goal structure or to its negative 
goal structure, i.e. whether B31-58-DFT is more of an attractor or a repeller 
concept.
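
The preloading scheme just described could be sketched as a data structure.
This is purely illustrative: the field names and types are invented here, the
word lists are placeholders (the concept is deliberately left opaque), and
none of it reflects any actual AGI implementation:

```python
from dataclasses import dataclass, field

@dataclass
class EthicalConcept:
    """Hypothetical register entry for one preloaded ethical concept."""
    label: str                 # opaque symbol, e.g. "B31-58-DFT"
    importance: float          # preset high by the mentors
    truth: float = 0.0         # starts at zero: meaning not yet grounded
    attractor: bool = True     # GoalNode polarity: positive vs. negative goal structure
    # Register 1: words (per language) whose meaning is close to the concept.
    meaning_words: dict = field(default_factory=dict)
    # Register 2: words merely associated with the concept, not defining it.
    associated_words: dict = field(default_factory=dict)
    # Body-language patterns often observed with the concept.
    body_patterns: list = field(default_factory=list)

concept = EthicalConcept(
    label="B31-58-DFT",
    importance=0.95,
    attractor=True,
    meaning_words={"en": ["<close-meaning word>"], "fr": ["<mot proche>"]},
    associated_words={"en": ["<often co-occurring word>"]},
    body_patterns=["smiles on human faces", "wagging tails", "purring"],
)
```

The zero truth value is the important design choice: the AGI holds the
symbol and its importance from day one, but must earn the grounding itself,
lifting the truth value through experiential and taught learning.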


(BTW, most likely there would need to be some system for ensuring that the 
urge to contemplate concept B31-58-DFT didn't get so strong that the AGI 
was incapable of doing anything else.)


We could also load in some body-language patterns often observed in 
association with the concept, if there are such things in this case, e.g. 
smiles on human faces, wagging tails on dogs, purring in cats, etc. (or some 
other pattern, e.g. (1) bared teeth, growling, hissing, frowns, red faces; 
(2) pricked ears, lifted eyebrows, quiet alertness; and so on).


We make sure that the words we load into the language registers include 
words that the AGI in the infantile stages of development might most likely 
associate with concept B31-58-DFT, so that the association between the 
prebuilt info about B31-58-DFT and what the AGI learns early in its life can