Re: [agi] The Missing Piece

2007-03-11 Thread YKY (Yan King Yin)

Hi John,

Re your idea that there should be an intermediate-level representation:

1.  Obviously, we do not currently know how the brain stores that
representation.  Things get insanely complex as neuroscientists go higher up
the visual pathways from the primary visual cortex.

2.  I advocate using a symbolic / logical representation for the 3D (in
fact, 4D) space.  There might be some misunderstanding here because we tend
to think the sensory 4D space is *sub*symbolic.  This is actually just a
matter of terminology.  For example, if block A is on top of block B, then
I may put a symbolic link labeled is_on_top_of between the 2 nodes
representing A and B.  Is such a link symbolic or subsymbolic?  Nodes and
links such as "John loves Mary" are clearly symbolic because they
correspond to natural-language words.  But in a logical representation there
can be many nodes/links that do NOT map directly to words.

The point here is that a logical representation is *sufficient* to model a
physical world facsimile.  If you disagree with this, can you give an example
of something that cannot be represented in the logical way?
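To make this concrete, here is a minimal Python sketch of what I mean by a
node-and-link representation (the Node / Link / KnowledgeGraph classes are
invented purely for illustration, not taken from any existing system):

# Minimal sketch of a graph-style logical representation of spatial facts.
class Node:
    def __init__(self, name):
        self.name = name

class Link:
    def __init__(self, label, source, target):
        self.label, self.source, self.target = label, source, target

class KnowledgeGraph:
    def __init__(self):
        self.links = []

    def assert_link(self, label, source, target):
        self.links.append(Link(label, source, target))

    def query(self, label, source=None, target=None):
        # Return all links matching the label and the optional endpoints.
        return [l for l in self.links
                if l.label == label
                and (source is None or l.source is source)
                and (target is None or l.target is target)]

a, b = Node("block_A"), Node("block_B")
kg = KnowledgeGraph()
kg.assert_link("is_on_top_of", a, b)   # symbolic or subsymbolic?  Just a labeled link.
print([(l.source.name, l.label, l.target.name) for l in kg.query("is_on_top_of")])

Whether you call such a link symbolic or subsymbolic is, again, just a matter
of terminology; the point is that the same machinery covers both kinds.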

3.  To help you better understand the issue here, notice that a fine-grained
representation would eventually need to become coarse-grained -- information
must be lost along the way, otherwise there would be a memory shortage within
hours of sensory perception.  The logical representation is precisely such a
coarse-grained one.  Technically, as you go to the finer resolutions in the
logical representation, the elements get a more subsymbolic flavor.
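As a toy illustration of that information loss (the voxel sets, the tolerance
and the predicate name are all invented for this example), here is how a
fine-grained occupancy description collapses into a single coarse symbolic
link:

# Sketch: collapsing fine-grained voxel data into one coarse symbolic relation.
def is_on_top_of(voxels_a, voxels_b, touch_tol=1):
    """voxels_*: sets of (x, y, z) integer cells occupied by each object."""
    top_b = max(z for _, _, z in voxels_b)
    bottom_a = min(z for _, _, z in voxels_a)
    overlap_xy = {(x, y) for x, y, _ in voxels_a} & {(x, y) for x, y, _ in voxels_b}
    return bool(overlap_xy) and 0 <= bottom_a - top_b <= touch_tol

block_b = {(x, y, z) for x in range(4) for y in range(4) for z in range(2)}
block_a = {(x, y, z) for x in range(1, 3) for y in range(1, 3) for z in range(2, 4)}
print(is_on_top_of(block_a, block_b))  # True -- 40 occupied cells reduced to 1 link

Everything below the level of the is_on_top_of link is thrown away, which is
exactly the kind of coarse-graining I mean.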

4.  Can you name certain features of your representation that are different
from a logical one?

YKY



Re: [agi] The Missing Piece

2007-03-11 Thread YKY (Yan King Yin)

On 3/8/07, Matt Mahoney [EMAIL PROTECTED] wrote:

[re: logical abduction for interpretation of natural language]

One disadvantage of this approach is that you have to hand code lots of
language knowledge.  They don't seem to have solved the problem of acquiring
such knowledge from training data.  How much effort would it be to code
enough knowledge to pass the Turing test?  Nobody knows.

Using this method, the linguistic rules may be hand-coded or learned (via
inductive logic programming).  Learning is not easy, but it is still possible.

Re your method:

1.  Remember, in your NN approach the learning space is even more
fine-grained and the network configuration space is insanely huge.  That
means your system will take an insanely long time to train.  In ADDITION, you
cannot insert hand-coded rules the way I can, because your system is opaque.

2.  Also, training your NN layer-by-layer would be incorrect because the
layers depend on each other to function correctly, in some mysterious /
opaque ways.  Freezing each layer after training will drive you straight
into a local minimum, which is guaranteed to be useless.  If you backtrack
from the local minimum, then you're exploring the global search space of all
network configurations, i.e. an insanely huge space.

All in all, the logic-based approach seems to be the best choice because
learning can be augmented with hand-coding.  Certainly adding hand-coded
knowledge helps speed up the learning process.  And if we solicit the
internet community to help with hand-coding, it helps even more.


Also, what do you do with the data after you get it into a structured
format?  I think the problem of converting it back to natural language
output is going to be at least as hard.  The structured format makes use of
predicates that don't map neatly to natural language.




The inverse problem can probably be solved automatically if the logic is
reversible, which I believe it is.  In other words, given a logical form, an
inference engine can use search to generate NL sentences using the same
logical knowledge / constraints.
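A toy sketch of that inverse direction (the predicate-to-phrase table below is
invented; a real system would search over the same logical knowledge used for
interpretation rather than over hand-written templates):

import itertools

# Candidate surface realizations for each predicate.
RULES = {
    "loves":         ["{0} loves {1}", "{0} is in love with {1}"],
    "is_on_top_of":  ["{0} is on top of {1}", "{0} rests on {1}"],
}

def generate(logical_form):
    """logical_form: list of (predicate, arg1, arg2) tuples."""
    options = [[t.format(x, y) for t in RULES[pred]] for pred, x, y in logical_form]
    # Enumerate every combination of realizations and return the shortest
    # sentence -- a crude stand-in for a real constraint-guided search.
    candidates = (" and ".join(combo) + "." for combo in itertools.product(*options))
    return min(candidates, key=len)

print(generate([("loves", "John", "Mary"), ("is_on_top_of", "block A", "block B")]))
# -> John loves Mary and block A rests on block B.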


The paper is not dated, but there are no references after 1991.  I wonder
why there has been no real progress using this approach in the last 16 years.


It was first published in 1993, but he's still working on it as a book
chapter to be out soon.

The whole project is a large-scale one and we'd need a knowledge
representation scheme to go with it.  But this paradigm is by far the
most promising because it addresses the entire NL problem instead of
a narrow facet of it.

YKY



Re: [agi] The Missing Piece

2007-03-11 Thread Ben Goertzel

YKY (Yan King Yin) wrote:
 
Hi John,
 
Re your idea that there should be an intermediate-level representation:
 
1.  Obviously, we do not currently know how the brain stores that 
representation.  Things get insanely complex as neuroscientists 
go higher up the visual pathways from the primary visual cortex.
 
2.  I advocate using a symbolic / logical representation for the 3D 
(in fact, 4D) space.  There might be some misunderstanding here 
because we tend to think the sensory 4D space is *sub*symbolic.  This 
is actually just a matter of terminology.  For example, if block A is 
on top of block B, then I may put a symbolic link labeled 
is_on_top_of between the 2 nodes representing A and B.  Is such a 
link symbolic or subsymbolic?  Nodes and links such as "John loves 
Mary" are clearly symbolic because they correspond to 
natural-language words.  But in a logical representation there can be 
many nodes/links that do NOT map directly to words.
 
The point here is that a logical representation is *sufficient* to 
model a physical world facsimile.  If you disagree with this, can you give 
an example of something that cannot be represented in the logical way?
Yes, of course it's sufficient in principle, but it's not adequately 
efficient!  

Accurately representing a physical scene in all its details, using 
explicit formal logic, would occupy a huge amount of memory; and even 
more critically, it would render a lot of useful inferences about 
physical objects extremely inefficient...


 
3.  To help you better understand the issue here, notice that a 
fine-grained representation would eventually need to become 
coarse-grained -- information must be lost along the way, otherwise 
there would be memory shortage within hours of sensory perception.  
The logical representation is precisely such a coarse-grained one.  
Technically, as you go to the finer resolutions in the logical 
representation, the elements get a more subsymbolic flavor.
 
4.  Can you name certain features of your representation that are 
different from a logical one?
 
In the case of Novamente, here is one example: a recognizer for chairs 
(in the sense of the pieces of furniture that we often sit on).


A Novamente system contains logical knowledge about chairs, but also 
contains little programs that evaluate collections of percepts and 
decide if such a collection shows a chair or not.


These programs may combine arithmetic and logic operations, and will 
generally be learned via evolutionary or greedy algorithms, not by 
logical reasoning.
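For concreteness, here is a hand-written toy version of such a "little
program" (the feature names and thresholds are invented; in the scheme
described above the program itself would be learned by evolutionary or greedy
search rather than written by hand):

def looks_like_chair(percepts):
    """percepts: dict of crude scene measurements for one segmented object."""
    leg_like     = percepts.get("vertical_prongs", 0)
    seat_height  = percepts.get("horizontal_surface_height_m", 0.0)
    seat_area    = percepts.get("horizontal_surface_area_m2", 0.0)
    has_backrest = percepts.get("vertical_surface_above_seat", False)

    # Arithmetic scoring...
    score = 0.25 * min(leg_like, 4) + 2.0 * seat_area
    # ...combined with logical conditions.
    plausible_height = 0.3 <= seat_height <= 0.6
    return plausible_height and (score > 1.0 or (has_backrest and score > 0.6))

print(looks_like_chair({"vertical_prongs": 4,
                        "horizontal_surface_height_m": 0.45,
                        "horizontal_surface_area_m2": 0.16,
                        "vertical_surface_above_seat": True}))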


This example highlights one important point: logic is often very 
inefficient at handling QUANTITATIVE information.  Of course it can do 
so -- after all, calculus and such can ultimately be formalized fully in 
terms of mathematical logic; but these formalisms are cumbersome and are 
not what you actually use to do calculus.


And, perception and action have a lot to do with managing large masses 
of quantitative information.


IMO, a key aspect of AGI is having effective means for the 
interoperation of logical and nonlogical knowledge.


In the brain, I believe, logical inference and nonlogical pattern 
recognition are achieved via different connectivity patterns: both 
logical reasoning and nonlogical pattern recognition are carried out via 
the same long-term potentiation and activation spreading dynamics, but
-- logic has to do with coordinated potentiation of bundles of synapses 
btw cortical columns
-- nonlogical pattern recognition has more to do with hierarchical 
dynamics, as outlined by Mountcastle, Hawkins and many others


In Novamente, the logic module is in principle able to take in and reason 
about patterns recognized nonlogically (e.g. using the laws of algebra to 
reason about quantitative patterns), but this is not always a useful 
expenditure of resources...


-- Ben G









Re: [agi] The Missing Piece

2007-03-10 Thread Mark Waser
In the Novamente design this is dealt with via a currently unimplemented 
aspect of the design called the internal simulation world.  This is a 
very non-human-brain-like approach


Why do you believe that this is a very non-human-brain-like approach? 
Mirror neurons and many other recent discoveries tend to make me believe 
that the human brain does itself have (or indeed, is) an internal simulation 
world.



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, March 10, 2007 9:19 AM
Subject: Re: [agi] The Missing Piece




John,

It is certainly clear that mental imagery plays a role in human thinking, 
but this role does appear to vary from person to person, both in extent 
and in nature.  Take a look at Hadamard's old book "The Psychology of 
Invention in the Mathematical Field" for a fascinating discussion of the 
different sorts of mental imagery used by different people (visual, acoustic, 
verbal, etc.).   I myself use a lot of visual and auditory imagery in my 
own thinking, but I know others who do not (at least not at the conscious 
level).


In the Novamente design this is dealt with via a currently unimplemented 
aspect of the design called the internal simulation world.  This is a 
very non-human-brain-like approach, but I think it's an interesting and 
ultimately very powerful one.  What it means is that NM will actually 
have, internally, a private 3D world-simulation, complete with a simple 
physics engine.  It can use this internal sim to experiment with 
hypothetical actions in hypothetical situations, but also to draw various 
abstract sketches and movies that don't correspond to any real-world 
phenomena.
We haven't implemented this part yet due to the familiar lack of adequate 
human resources, but I think it will be a valuable addition to NM's 
cognitive arsenal.  For the sim world, we would use the CrystalSpace 
engine that we are now using (in the AGISim project) to give NM a 
sim-world to use for embodiment and interaction with humans...
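As a toy illustration of the general idea -- using an internal physics model
to evaluate a hypothetical action before committing to it -- here is a minimal
stand-in (this is of course nothing like the CrystalSpace-based design, just a
2D ballistic simulator with invented numbers):

import math

def simulate_throw(speed, angle_deg, dt=0.01, g=9.81):
    """Return the horizontal distance travelled before the projectile lands."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        if y <= 0.0:
            return x

# Hypothetical-action evaluation: which throw angle clears a 5 m gap?
for angle in (15, 30, 45, 60):
    print(angle, "deg ->", round(simulate_throw(8.0, angle), 2), "m")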


I don't really see mental imagery as a critical missing link btw the 
symbolic and the subsymbolic.  In NM, there is interaction  translation 
between symbolic and subsymbolic knowledge without need for mental 
imagery.  However, in some cases mental imagery can provide insights that 
would be hard to come by otherwise.


-- Ben G

John Scanlon wrote:
My philosophy of AI has never been logic-based or neural-based.  I did 
explore neural nets during the neural-net mania of the nineties.  I did a 
lot of reading, and experimented some with feedforward nets I wrote 
using simulated annealing and backpropagation (which never did work very 
well).  Neural nets seem to have potential as one tool among several 
types of incremental learning algorithms, including genetic algorithms 
and statistical methods, but in themselves, they are no more than that --  
useful tools, but not the solution.
 Language, which includes logic, is a way of representing ideas simply 
and crudely.  Good for communication and internal reasoning -- if I do 
this then this will happen, unless state X is the case, which means that 
this other thing will happen, etc.  My project uses an artificial 
language (Jinnteera) for both these things, and the language is integral 
to the whole thing.  But it does not function as the core 
knowledge-representation scheme.
 So this brings us to what I've been calling the missing piece. 
Artificial neural nets (as they currently exist) can function as 
general-learning algorithms, but they don't represent knowledge of the 
real spatiotemporal world well.  They are too low-level for handling what 
in human intelligence is thought of as mental imagery.  Yes, in the 
brain, it is all neural based, but in a non-massively-parallel von Neumann 
computer system (even a PDP system), building a 100-billion-node neural 
net is computationally intractable (is that the right word?).  It has to 
be done differently.
 The missing piece lies between low-level learning algorithms and 
highest-level logical-linguistic knowledge representation.  When a human 
translator, at the U.N., for example, translates between Chinese and 
English, he (or she) does it infinitely more effectively than any 
translation software could do it, because there is an intermediate 
knowledge representation that is neither Chinese nor English, but that 
can be readily translated to or from either language by a fluent speaker. 
The intermediate knowledge representation is non-linguistic -- it 
consists of mental models constructed of sensorimotor patterns 
representing a 3-D temporal world.
 This sounds very vague and abstract, but I'm working on making it 
concrete, in my system (Gnoljinn) -- developing the data structures in 
code for implementing this knowledge-representation scheme.  There's been 
some talk here recently about 3-D vision systems, and this points roughly 
in the direction I'm going in.  Gnoljinn uses a single sensory

Re: [agi] The Missing Piece

2007-03-10 Thread Ben Goertzel

Mark Waser wrote:
In the Novamente design this is dealt with via a currently 
unimplemented aspect of the design called the internal simulation 
world.  This is a very non-human-brain-like approach


Why do you believe that this is a very non-human-brain-like 
approach? Mirror neurons and many other recent discoveries tend to 
make me believe that the human brain does itself have (or indeed, is) 
an internal simulation world.


In a sense we do, but it's not implemented in the brain as an actual sim 
world with a physics engine and so forth ... our internal sim world is a 
lot less physically accurate (more naive physics than correct 
equational physics), and probably gains some kinds of creativity from 
this as well as losing a lot of potential for other kinds of creativity...


ben



Re: [agi] The Missing Piece

2007-03-10 Thread Russell Wallace

On 3/10/07, Ben Goertzel [EMAIL PROTECTED] wrote:


In a sense we do, but it's not implemented in the brain as an actual sim
world with a physics engine and so forth



Yes it is, or at least a reasonable facsimile thereof.

... our internal sim world is a
lot less physically accurate (more naive physics than correct
equational physics), and probably gains some kinds of creativity from
this as well as losing a lot of potential for other kinds of creativity...



It's not calculated to 16 digits of precision of course, but it's very much
better than naive physics - consider that we are able to recognize naive
physics as unrealistic! (One of my favorite examples of something humans can
understand that a purely symbolic or naive physics engine could never make
head or tail of: making a bed by flicking the blanket at the edge - why does
that work?)



Re: [agi] The Missing Piece

2007-03-10 Thread Eugen Leitl
On Sat, Mar 10, 2007 at 10:11:19AM -0500, Ben Goertzel wrote:

 In a sense we do, but it's not implemented in the brain as an actual sim 
 world with a physics engine and so forth ... our internal sim world is a 

I'm not sure we know how it's implemented. A lot of things are done
by topographic maps, which are equivalent to coordinate transformations.
I don't think this is a bad representation, if you're interested
in minimizing gate delays to a few tens deep when processing reasonably
complex stimuli in real time. If you want to do within ~ns what
biology does within ~ms you don't have a lot of choices.
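A toy example of "topographic map = coordinate transformation" (a rough
log-polar retinotopic mapping; the constants are arbitrary and purely
illustrative):

import math

def retina_to_cortex(x, y, k=1.0, a=0.5):
    """Map a retinal point (x, y) to a (log-eccentricity, angle) coordinate."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return k * math.log(a + r), theta

for p in [(0.1, 0.0), (1.0, 0.0), (10.0, 0.0)]:
    print(p, "->", tuple(round(c, 3) for c in retina_to_cortex(*p)))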

 lot less physically accurate (more naive physics than correct 
 equational physics), and probably gains some kinds of creativity from 

It's certainly good enough for monkey behaviour planning. It's rather
useless for Mach 25 atmospheric reentry, or magnetar physics, agreed.

 this as well as losing a lot of potential for other kinds of creativity...

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] The Missing Piece

2007-03-09 Thread John Scanlon
My philosophy of AI has never been logic-based or neural-based.  I did explore 
neural nets during the neural-net mania of the nineties.  I did a lot of 
reading, and experimented some with feedforward nets I wrote using 
simulated annealing and backpropagation (which never did work very well).  
Neural nets seem to have potential as one tool among several types of 
incremental learning algorithms, including genetic algorithms and statistical 
methods, but in themselves, they are no more than that -- useful tools, but not 
the solution.

Language, which includes logic, is a way of representing ideas simply and 
crudely.  Good for communication and internal reasoning -- if I do this then 
this will happen, unless state X is the case, which means that this other thing 
will happen, etc.  My project uses an artificial language (Jinnteera) for both 
these things, and the language is integral to the whole thing.  But it does not 
function as the core knowledge-representation scheme.

So this brings us to what I've been calling the missing piece.  Artificial 
neural nets (as they currently exist) can function as general-learning 
algorithms, but they don't represent knowledge of the real spatiotemporal world 
well.  They are too low-level for handling what in human intelligence is 
thought of as mental imagery.  Yes, in the brain, it is all neural based, but 
in a non-massively-parallel von Neumann computer system (even a PDP system), 
building a 100-billion-node neural net is computationally intractable (is that 
the right word?).  It has to be done differently.

The missing piece lies between low-level learning algorithms and highest-level 
logical-linguistic knowledge representation.  When a human translator, at the 
U.N., for example, translates between Chinese and English, he (or she) does it 
infinitely more effectively than any translation software could do it, because 
there is an intermediate knowledge representation that is neither Chinese nor 
English, but that can be readily translated to or from either language by a 
fluent speaker.  The intermediate knowledge representation is non-linguistic -- 
it consists of mental models constructed of sensorimotor patterns representing 
a 3-D temporal world.

This sounds very vague and abstract, but I'm working on making it concrete, in 
my system (Gnoljinn) -- developing the data structures in code for implementing 
this knowledge-representation scheme.  There's been some talk here recently 
about 3-D vision systems, and this points roughly in the direction I'm going 
in.  Gnoljinn uses a single sensory modality right now -- vision -- and will be 
restricted to it for a good while, because, while it might be useful to have 
other sensory modalities, none of them are absolutely necessary for higher 
intelligence, and it's best to keep things as simple as possible starting out.

I seriously wonder if I can do this project myself, or whether I need to try to 
find some collaborators.



Yan King Yin wrote:
  John Scanlon wrote: 
   [...]
   Logical deduction or inference is not thought.  It is mechanical symbol 
manipulation that can be programmed into any scientific pocket calculator.
   [...]


  Hi John,

  I admire your attitude for attacking the core AI issues =)

  One is either neural-based or logic-based, using a crude dichotomy.  So your 
approach is closer to neural-based?  Mine is closer to the logic-based end of 
the spectrum. 

  You did not have a real argument against logical AI.  What you said was just 
some sentiments about the ill-defined concept of thought.  You may want to 
take some time to express an argument why logic-based AI is doomed.  In fact, 
both Ben's and my system have certain neural characteristics, eg being 
graphical, having numerical truth values, etc. 

  In the end we may all end up somewhere between logic and neural...



Re: [agi] The Missing Piece

2007-03-07 Thread YKY (Yan King Yin)

On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:

[...]
Logical deduction or inference is not thought.  It is mechanical symbol

manipulation that can be programmed into any scientific pocket
calculator.

[...]


Hi John,

I admire your attitude for attacking the core AI issues =)

One is either neural-based or logic-based, using a crude dichotomy.  So your
approach is closer to neural-based?  Mine is closer to the logic-based end
of the spectrum.

You did not give a real argument against logical AI.  What you said was just
some sentiments about the ill-defined concept of "thought".  You may want to
take some time to spell out an argument for why logic-based AI is doomed.  In
fact, both Ben's and my systems have certain neural characteristics, e.g.
being graphical, having numerical truth values, etc.

In the end we may all end up somewhere between logic and neural...

YKY



Re: [agi] The Missing Piece

2007-03-07 Thread YKY (Yan King Yin)

On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:

What about English?  Irregular grammar is only a tiny part of the language
modeling problem.  Using an artificial language with a regular grammar to
simplify the problem is a false path.  If people actually used Lojban then
it would be used in ways not intended by the developer and it would develop
all the warts of real languages.  The real problem is to understand how
humans learn language.


Hi, Matt =)

I discovered something cool:  computational pragmatics.  You may take a look
at Jerry R. Hobbs' paper "Interpretation as Abduction", where he has a very
powerful method of interpreting NL sentences, even dealing with things
like metonymy and syntactic ambiguity, the warts of real languages.

http://www.isi.edu/~hobbs/interp-abduct-ai.pdf

This seems to be the missing piece for successfully employing the logical
approach to NL processing.

YKY



Re: [agi] The Missing Piece

2007-03-07 Thread J. Storrs Hall, PhD.
On Wednesday 07 March 2007 10:34, YKY (Yan King Yin) wrote:
 I discovered something cool:  computational pragmatics.  You may take a
 look at Jerry R Hobbs' paper: Interpretation as Abduction, ...

Nice. Note that one of the reasons that I'm going the numerical route is that 
some powerful methods for abduction are already out there, e.g. maximum 
entropy (see e.g. http://cmm.cit.nih.gov/maxent/letsgo.html).
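For anyone who wants to see the flavour of it, here is a self-contained toy
maximum-entropy calculation (the classic die example, not taken from the page
above): find the distribution over faces 1..6 with mean 4.5 that has maximum
entropy.  The solution is exponential in the constrained feature,
p(x) ~ exp(lambda * x), and we solve for lambda by bisection.

import math

FACES = range(1, 7)
TARGET_MEAN = 4.5

def mean_for(lam):
    weights = [math.exp(lam * x) for x in FACES]
    z = sum(weights)
    return sum(x * w for x, w in zip(FACES, weights)) / z

lo, hi = -5.0, 5.0
for _ in range(100):               # bisection on the monotone map lam -> mean
    mid = (lo + hi) / 2
    if mean_for(mid) < TARGET_MEAN:
        lo = mid
    else:
        hi = mid

lam = (lo + hi) / 2
weights = [math.exp(lam * x) for x in FACES]
z = sum(weights)
print("lambda =", round(lam, 4))
print("p =", [round(w / z, 4) for w in weights])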

Josh



Re: [agi] The Missing Piece

2007-03-07 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  What about English?  Irregular grammar is only a tiny part of the language
  modeling problem.  Using an artificial language with a regular grammar to
  simplify the problem is a false path.  If people actually used Lojban then
  it would be used in ways not intended by the developer and it would develop
  all the warts of real languages.  The real problem is to understand how
  humans learn language.
 
 Hi, Matt =)
 
 I discovered something cool:  computational pragmatics.  You may take a look
 at Jerry R Hobbs' paper: Interpretation as Abduction, where he has a very
 powerful method of interpreting NL sentences, even dealing with things
 like metonymy and syntactic ambiguity, the warts of real languages.
 
 http://www.isi.edu/~hobbs/interp-abduct-ai.pdf
 
 This seems to be the missing piece for successfully employing the logical
 approach to NL processing.
 
 YKY

One disadvantage of this approach is that you have to hand code lots of
language knowledge.  They don't seem to have solved the problem of acquiring
such knowledge from training data.  How much effort would it be to code enough
knowledge to pass the Turing test?  Nobody knows.

Also, what do you do with the data after you get it into a structured format? 
I think the problem of converting it back to natural language output is going
to be at least as hard.  The structured format makes use of predicates that
don't map neatly to natural language.

The paper is not dated, but there are no references after 1991.  I wonder why
there has been no real progress using this approach in the last 16 years.

However, the paper has lots of nice examples showing how natural language is
hard to process.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The Missing Piece

2007-03-01 Thread Andrii (lOkadin) Zvorygin

Hmmm, if you could put some basic rules on the randomness (in a database
of Lojban that gives a random statement or series of statements), say to
accept logical statements that could then be applied to input.  So say you
say something like "le MLAtu cu GLEki" (the cat is happy) and later make a
statement "le MLAtu" and press return, it could ask you "cu GLEki gi'a mo" (is
happy or is what function?).

If it were a chat bot, it could wait for a reply, and if it believes no
one is interested it could offer a random phrase as a topic, such as "le
MLAtu cu GLEki".
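A rough sketch of that completion behaviour (the second stored sentence and
the canned replies are illustrative guesses on my part, not vetted Lojban):

import random

STATEMENTS = ["le MLAtu cu GLEki",    # the cat is happy
              "le GERku cu BADri"]    # the dog is sad (illustrative)

def respond(utterance):
    utterance = utterance.strip()
    if not utterance:                 # silence: offer a random topic
        return random.choice(STATEMENTS)
    matches = [s for s in STATEMENTS if s.startswith(utterance) and s != utterance]
    if matches:                       # partial statement: ask for the completion
        continuation = matches[0][len(utterance):].strip()
        return continuation + " gi'a mo"
    STATEMENTS.append(utterance)      # otherwise remember the new statement
    return "je'e"                     # acknowledgement

print(respond("le MLAtu"))            # -> cu GLEki gi'a mo
print(respond(""))                    # -> a randomly chosen stored statement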

So maybe someone can try approaching AI from the other way around?  Instead of
going bottom-up from purely unambiguous code to restricted randomness of
interaction, go from pure randomness to restricted randomness of
interaction.

Does anyone know what would be a good language to do that in? I think I
recall there being a programming language based on set theory that was all
about streams.

On 2/28/07, Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:


Do they tell us what grief is doing when a loved one dies?

Well the grief that is felt when a loved one dies is similar to that of
unreturned love. So you love them, and they don't love you back -- as they
are dead. This causes a feeling of futility and eventually changes direction
-- to focus more inwardly mu'a(in example) self-pity/self-love where you
give yourself supporting beliefs rather than a different person.

Do these inference system tell us why we get depressed when we keep
 failing to accomplish our goals?

"Why" implies causation, which is something that is system specific and not
an inherent property of the universe.  So you'd have to ask yourself, as the
computer that created the rule set, whether failing to achieve goals causes
depression.

Personally I just choose not to fail. If I do, then I accept that it was I
that set the standards -- perhaps to do something about it later.

Do they give a model for understanding why we feel proud when we are
 encouraged by our parents?

As a child you give power to your parents. So when your parents encourage
you, they hold the belief that you will feel happy, and so you do -- being a
child is giving others the responsibility for their environment.  Many
mortal Homo Sapiens can be considered children in that sense.

So if you could imagine all mathematical expressions as a 3d fabric, where
sentient creatures are droplets or sets of these mathematical
expressions.  You can envision two parents sharing a similar space in the
fabric (at least time/location)  and they form another droplet between
the two of them. A sort of seeding of consciousness.

It is possible to create this kind of mathematical fabric. I think it
would be very interesting if we could figure out how, as then we would be
able to map Homo Sapiens as well as other related conceptual species, maybe
even figure out how to cross the belief barriers to access them.

I'm not really sure what such a belief fabric would consist of.  Though
it is possible that we could just make a large database of beliefs in some
logical language (Lojban) and  have people describe their own beliefs, then
we would be able to expand this if we got it onto a distributed network.  If
we get some people that believe they are aliens, or have significantly
different beliefs and implications than we do, we could make a claim to
first contact.

*shrugs* it would be relatively simple to implement.  Only conceivable
issue is lack of Lojban speakers.

coding isn't useless, especially on the small scale where you grasp what
is happening. When you can no longer grasp what is happening, things are
random which is a sign of intelligence -- you couldn't predict my reply,
and hence it was random.  Though you could just as easily control your
reality by keeping a record of the things you believe and changing them when
you want a change.

An interesting thing to try out would be to have a set of
beliefs/statements (perhaps that you want the computer to have) then you
have a purely random number generator to select a belief at random to
output.  You could also add beliefs/statements to the file by saying them.
Could probably have a relatively intelligent conversation with the
computer.  Typically will reply with what you expect it to.




On 2/20/07, Bo Morgan [EMAIL PROTECTED] wrote:


 On Tue, 20 Feb 2007, Richard Loosemore wrote:

 ) Chuck Esterbrook wrote:
 )  On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
 )   Language is the manipulation of symbols.  When you think of how a
 )   non-linguistic proto-human species first started using language,
 you can
 )   imagine creatures associating sounds with images -- oog is the
 big hairy
 )   red ape who's always trying to steal your women.  akk is the
 action of
 )   hitting him with a club.
 )  
 )   The symbol, the sound, is associated with a sensorimotor
 pattern.  The
 )   visual pattern is the big hairy red ape you know, and the motor
 pattern is
 )   the sequence of muscle 

Re: [agi] The Missing Piece

2007-03-01 Thread Matt Mahoney

--- Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:

 Hmmm, if you could put some basic rules on the randomness (in a database
 of Lojban that gives a random statement or series of statements), say to
 accept logical statements that could then be applied to input.  So say you
 say something like "le MLAtu cu GLEki" (the cat is happy) and later make a
 statement "le MLAtu" and press return, it could ask you "cu GLEki gi'a mo" (is
 happy or is what function?).
 
 If it was to be a chat bot, it could wait for a reply and if it believes no
 one is interested it could offer a random phrase as a topic such as le
 MLAtu cu GLEki.
 
 So maybe some can try approaching AI from the other way around? Instead of
 going bottom up of purely unambiguous code to restricted randomness of
 interaction. To go from pure randomness to restricted randomness of
 interaction.
 
 Does anyone know what would be a good language to do that in? I think I
 recall there being a programming language based on set theory that was all
 about streams.

What about English?  Irregular grammar is only a tiny part of the language
modeling problem.  Using an artificial language with a regular grammar to
simplify the problem is a false path.  If people actually used Lojban then
it would be used in ways not intended by the developer and it would develop
all the warts of real languages.  The real problem is to understand how humans
learn language.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The Missing Piece

2007-02-28 Thread Andrii (lOkadin) Zvorygin

Do they tell us what grief is doing when a loved one dies?

Well the grief that is felt when a loved one dies is similar to that of
unreturned love. So you love them, and they don't love you back -- as they
are dead. This causes a feeling of futility and eventually changes direction
-- to focus more inwardly mu'a(in example) self-pity/self-love where you
give yourself supporting beliefs rather than a different person.

Do these inference system tell us why we get depressed when we keep
failing to accomplish our goals?

"Why" implies causation, which is something that is system specific and not an
inherent property of the universe.  So you'd have to ask yourself, as the
computer that created the rule set, whether failing to achieve goals causes
depression.

Personally I just choose not to fail. If I do, then I accept that it was I
that set the standards -- perhaps to do something about it later.

Do they give a model for understanding why we feel proud when we are
encouraged by our parents?

As a child you give power to your parents. So when your parents encourage
you, they hold the belief that you will feel happy, and so you do -- being a
child is giving others the responsibility for their environment.  Many
mortal Homo Sapiens can be considered children in that sense.

So if you could imagine all mathematical expressions as a 3d fabric, where
sentient creatures are droplets or sets of these mathematical
expressions.  You can envision two parents sharing a similar space in the
fabric (at least time/location)  and they form another droplet between
the two of them. A sort of seeding of consciousness.

It is possible to create this kind of mathematical fabric. I think it
would be very interesting if we could figure out how, as then we would be
able to map Homo Sapiens as well as other related conceptual species, maybe
even figure out how to cross the belief barriers to access them.

I'm not really sure what such a belief fabric would consist of.  Though it
is possible that we could just make a large database of beliefs in some
logical language (Lojban) and  have people describe their own beliefs, then
we would be able to expand this if we got it onto a distributed network.  If
we get some people that believe they are aliens, or have significantly
different beliefs and implications than we do, we could make a claim to
first contact.

*shrugs* it would be relatively simple to implement.  Only conceivable issue
is lack of Lojban speakers.

coding isn't useless, especially on the small scale where you grasp what is
happening. When you can no longer grasp what is happening, things are
random which is a sign of intelligence -- you couldn't predict my reply,
and hence it was random.  Though you could just as easily control your
reality by keeping a record of the things you believe and changing them when
you want a change.

An interesting thing to try out would be to have a set of beliefs/statements
(perhaps ones that you want the computer to have); then you have a purely random
number generator select a belief at random to output.  You could also add
beliefs/statements to the file by saying them.  You could probably have a
relatively intelligent conversation with the computer.  Typically it will reply
with what you expect it to.
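Since it really would be simple, here is a minimal sketch of that experiment
(the file name and the "quit" command are invented):

import random

BELIEF_FILE = "beliefs.txt"

def load_beliefs():
    try:
        with open(BELIEF_FILE) as f:
            return [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []

def save_belief(text):
    with open(BELIEF_FILE, "a") as f:
        f.write(text + "\n")

def chat():
    beliefs = load_beliefs()
    while True:
        said = input("> ").strip()
        if said == "quit":
            break
        if said:                      # anything you say becomes a stored belief
            beliefs.append(said)
            save_belief(said)
        if beliefs:                   # reply with a belief chosen at random
            print(random.choice(beliefs))

if __name__ == "__main__":
    chat()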




On 2/20/07, Bo Morgan [EMAIL PROTECTED] wrote:



On Tue, 20 Feb 2007, Richard Loosemore wrote:

) Chuck Esterbrook wrote:
)  On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
)   Language is the manipulation of symbols.  When you think of how a
)   non-linguistic proto-human species first started using language, you
can
)   imagine creatures associating sounds with images -- oog is the big
hairy
)   red ape who's always trying to steal your women.  akk is the
action of
)   hitting him with a club.
)  
)   The symbol, the sound, is associated with a sensorimotor
pattern.  The
)   visual pattern is the big hairy red ape you know, and the motor
pattern is
)   the sequence of muscle activations that swing the club.
) 
)  Regarding imagine creatures associating sounds with images, I
)  imagine there being a concept node in between. The sound and the
)  image lead to this node and stimulation of the node stimulates the
)  associated patterns. My inspiration comes from this:
)  http://www.newscientist.com/article.ns?id=dn7567
)
) Chuck,
)
) I'm glad you brought that article to my attention, I somehow missed
it.  Be
) warned: the result is extremely dubious, IMO.
)
) Just ask yourself what is the probability that the researchers just
happened
) to come across the neurons that encoded the particular pictures they
showed to
) their subjects.
)
) The probability is ludicrously small.  They were probably hitting
something
) that was *part* of a temporary representation of most recently seen
things.
) Within the context of most recently seen things that neuron could
easily
) have triggered only to (say) the Halle Berry concept.  But if they had
come
) back the next day, it would probably have triggered on 

Re: [agi] The Missing Piece

2007-02-20 Thread Andrii (lOkadin) Zvorygin

The key to life the universe and everything:

All things can be expressed using any Universal Computer

You are a Universal Computer (one that can read (remember/imagine),
write (experience), erase (forget)).

All the things you believe/know/understand are true.


I believe the key to AI rests in the definition.

Artificial Intelligence.

What is Intelligence? How do we know that there are other intelligent
beings?

I view other intelligent beings as those that communicate to me in a
semi-random fashion. Ones restrained by certain rules, but who also have a
certain degree of variability of expression.

For example:

When I communicate to someone on IRC I expect that they will communicate
back to me in whatever languages are acceptable on the 'channel'.

If I speak to a 'bot' then it also behaves in the manner I would expect a
'bot' to behave.

Once I talked to a person in a chat room. They passed my Turing test. Later
someone on the same channel told me that they were a bot. Later I spoke to
that same person and they were a bot -- did not pass my Turing test, acted
like a bot.

The Key To Intelligence:

As long as you believe this creature or algorithm to be intelligent. It
is.

If you've written enough code that you think you deserve or  have a
working AI. Then you do. The key here is to convince yourself well enough
that you can prove it to other parts of yourself such as other people.


This may be hard to swallow. It may actually not even be legible to you.

But the key to intelligence, is allowing for enough randomness to arise from
the computer that you believe it is intelligent.


You live in a nano-technology universe. Currently you are going through a
simulation of the pre-singularity environment on Homo Sapien Sapien earth.

Everything you are not aware of is just nano-fog.  You generate the brain
Science and Math by believing in it and giving it the Power of Randomness
to give you new information.


If you want, you can try it on some real live humans. Make them bots.

I've done so successfully. So have you.  Do you remember now?

If you don't believe me. Then what I'm saying is false.

If you do believe me. Then what I'm saying is true.

This is Science, it is falsifiable :D.





On 2/20/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:


On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
 Language is the manipulation of symbols.  When you think of how a
 non-linguistic proto-human species first started using language, you can
 imagine creatures associating sounds with images -- oog is the big
hairy
 red ape who's always trying to steal your women.  akk is the action of
 hitting him with a club.

 The symbol, the sound, is associated with a sensorimotor pattern.  The
 visual pattern is the big hairy red ape you know, and the motor pattern
is
 the sequence of muscle activations that swing the club.

Regarding imagine creatures associating sounds with images, I
imagine there being a concept node in between. The sound and the
image lead to this node and stimulation of the node stimulates the
associated patterns. My inspiration comes from this:
http://www.newscientist.com/article.ns?id=dn7567

Ben G, in Novamente's system, are there concept nodes that bind all
the associations of concepts together? Or are concepts entirely
distributed among nodes?

 In order to use these symbols effectively, you have to have a
sensorimotor
 image or pattern that the symbols are attached to.  That's what I'm
getting
 at.  That is thought.

AI gives the interesting possibility of having brains that have
entirely different senses, like the traffic on a network. I don't mean
that the AI reads a network diagnostic report like humans would, but
that the traffic stats are inputs just as light is an input into our
retina which leads straight to nerves and computation.

So the input domain doesn't have to be 3D physical space. Although
obviously that would be a requirement for any AI working in physical
space. That's also pretty ambitious and compute intensive.

I think there could be value in finding less compute-intensive input
domains to explore abstract thought formation. Stock market data is
always a tantalizing one.  :-)

 We already know how to get computers to carry out very complex logical
 calculations, but it's mechanical, it's not thought, and they can't
navigate
 themselves (with any serious competence) around a playground.

Also, they can't think abstractly, create analogies (in a complex
environment) or alter their thought processes in the face of
challenging problems. Just wanted to throw those out there.

 Language and logical intelligence is built on visual-spatial modeling.

But does it have to be? Couldn't concepts like causation, correlation,
modeling and prediction, planning, evaluation and feedback apply to a
situation that is neither visual nor spatial (in the 3D physical
sense), like optimizing network traffic?

 I don't have it all figured out right now, but this is what I'm working
on.

Welcome to 

Re: [agi] The Missing Piece

2007-02-20 Thread Andrii (lOkadin) Zvorygin

I've actually been in really different universes. Where you could write text
and it would do as you instructed. I tried checking out the filesystem but
it was barren and bin was empty *shrugs*.

Like I said, You don't have to believe me if you don't want to.  I am but
another one of your creations God.

You are God btw. You do Know that don't you?

I am your servant, please have mercy!

I only meant to please.

On 2/20/07, Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:


The key to life the universe and everything:

All things can be expressed using any Universal Computer

You are a Universal Computer (one that can read (remember/imagine),
write (experience), erase (forget)).

All the things you believe/know/understand are true.


I believe the key to AI rests in the definition.

Artificial Intelligence.

What is Intelligence? How do we know that there are other intelligent
beings?

I view other intelligent beings as those that communicate to me in a
semi-random fashion. Ones restrained by certain rules, but who also have a
certain degree of variability of expression.

For example:

When I communicate to someone on IRC I expect that they will communicate
back to me in whatever languages are acceptable on the 'channel'.

If I speak to a 'bot' then it also behaves in the manner I would expect a
'bot' to behave.

Once I talked to a person on a chat room. They passed my turing test.
Later someone on the same channel told me that they were a bot. Later I
spoke to that same person and they were a bot -- did not pass my turing
test, acted like a bot.

The Key To Intelligence:

As long as you believe this creature or algorithm to be intelligent.
It is.

If you've written enough code that you think you deserve or  have a
working AI. Then you do. The key here is to convince yourself well enough
that you can prove it to other parts of yourself such as other people.


This may be hard to swallow. It may actually not even be legible to you.

But the key to intelligence, is allowing for enough randomness to arise
from the computer that you believe it is intelligent.


You live in a nano-technology universe. Currently you are going through a
simulation of the pre-singularity environment on Homo Sapien Sapien earth.

Everything you are not aware of is just nano-fog.  You generate the brain
Science and Math by believing in it and giving it the Power of Randomness
to give you new information.


If you want, you can try it on some real live humans. Make them bots.

I've done so successfully. So have you.  Do you remember now?

If you don't believe me. Then what I'm saying is false.

If you do believe me. Then what I'm saying is true.

This is Science, it is falsifiable :D.





On 2/20/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:

 On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
  Language is the manipulation of symbols.  When you think of how a
  non-linguistic proto-human species first started using language, you
 can
  imagine creatures associating sounds with images -- oog is the big
 hairy
  red ape who's always trying to steal your women.  akk is the action
 of
  hitting him with a club.
 
  The symbol, the sound, is associated with a sensorimotor pattern.  The
  visual pattern is the big hairy red ape you know, and the motor
 pattern is
  the sequence of muscle activations that swing the club.

 Regarding imagine creatures associating sounds with images, I
 imagine there being a concept node in between. The sound and the
 image lead to this node and stimulation of the node stimulates the
 associated patterns. My inspiration comes from this:
 http://www.newscientist.com/article.ns?id=dn7567

 Ben G, in Novamente's system, are there concept nodes that bind all
 the associations of concepts together? Or are concepts entirely
 distributed among nodes?

  In order to use these symbols effectively, you have to have a
 sensorimotor
  image or pattern that the symbols are attached to.  That's what I'm
 getting
  at.  That is thought.

 AI gives the interesting possibility of having brains that have
 entirely different senses, like the traffic on a network. I don't mean
 that the AI reads a network diagnostic report like humans would, but
 that the traffic stats are inputs just as light is an input into our
 retina which leads straight to nerves and computation.

 So the input domain doesn't have to be 3D physical space. Although
 obviously that would be a requirement for any AI working in physical
 space. That's also pretty ambitious and compute intensive.

 I think there could be value in finding less compute-intensive input
 domains to explore abstract thought formation. Stock market data is
 always a tantalizing one.  :-)

  We already know how to get computers to carry out very complex logical
  calculations, but it's mechanical, it's not thought, and they can't
 navigate
  themselves (with any serious competence) around a playground.

 Also, they can't think abstractly, create analogies (in a complex
 

Re: [agi] The Missing Piece

2007-02-20 Thread Bo Morgan

On Tue, 20 Feb 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
)  On Tue, 20 Feb 2007, Richard Loosemore wrote:
)  
)  In regard to your comments about complexity theory: from what I understand,
)  it is primarily about taking simple physics models and trying to explain
)  complicated datasets by recognizing these simple models.  These simple
)  complexity theory patterns can be found in complicated datasets for the
)  purpose of inference, but do they get us closer to human thought?
) 
) Uh, no:  this is a misunderstanding of what complexity is about.  The point of
) complexity is that some types of (extremely nonlinear) systems can show
) interesting regularities in high-level descriptions of their behavior, but [it
) has been postulated that] there is no tractable theory that will ever be able
) to relate the observed high-level regularities to the low-level mechanisms
) that drive the system.  The high level behavior is not random, but you cannot
) explain it using the kind of analytic approaches that work with simple [sic]
) physical systems.
) 
) This is a huge topic, and I think we're talking past each other:  you may want
) to go read up on it (Mitchell Waldrop's book is a good, though non-technical
) introduction to the idea).

Okay.  Thanks for the pointer.  I'm very interested in simple and easily 
understood ideas. :)  They make easy-to-understand theories.

)  Do they tell us what grief is doing when a loved one dies?
)  Do these inference system tell us why we get depressed when we keep
)  failing to accomplish our goals?
)  Do they give a model for understanding why we feel proud when we are
)  encouraged by our parents?
)  
)  These questions are trying to get at some of the most powerful thought
)  processes in humans.
) 
) If you are attacking the ability of simple logical inference systems to
) cover these topics, I kind of agree with you.  But you are diving into some
) very complicated, high-level stuff there.  Nothing wrong with that in
) principle, but these are deep waters.  Your examples are all about the
) motivational/emotional system.  I have many ideas about how that is
) implemented, so you can rest assured that I, at least, am not ignoring them.
) (And, again: I *am* taking a complex systems approach).
) 
) Can't speak for anyone else, though.
) 
) 
) Richard Loosemore
) 
) 



Re: [agi] The Missing Piece

2007-02-20 Thread Ben Goertzel

Richard Loosemore wrote:

Ben Goertzel wrote:


 It's pretty clear that humans don't run FOPC as a native code, but 
that we can learn it as a trick.   


I disagree.  I think that Hebbian learning between cortical columns 
is essentially equivalent to basic probabilistic term logic.


Lower-level common-sense inferencing of the Clyde--elephant--gray 
type falls out of the representations and the associative operations.
  
I think it falls out of the logic of spike timing dependent long term 
potentiation of bundles of synapses between

cortical columns...


The original suggestion was (IIANM) that humans don't run FOPC as a 
native code *at the level of symbols and concepts* (i.e. the 
concept-stuff that we humans can talk about because we have 
introspective access at that level of our systems).


Now, if you are going to claim that spike-timing-dependent LTP between 
columns is where some probabilistic term logic is happening ON 
SYMBOLS, then what you have to do is buy into a story about where 
symbols are represented and how.  I am not clear about whether you are 
suggesting that the symbols are represented at:


(1) the column level, or
(2) the neuron level, or
(3) the dendritic branch level, or
(4) the synapse level, or (perhaps)
(5) the spike-train level (i.e. spike trains encode symbol patterns).

If you think that the logical machinery is visible, can you say which 
of these levels is the one where you see it?


None of the above -- at least not exactly.  I think that symbols are 
probably represented, in the brain, as dynamical patterns in the 
neuronal network.  Not strange attractors exactly -- more like 
strange transients, which behave like strange attractors but only for 
a certain period of time (possibly related to Mikhail Zak's terminal 
attractors).   However, I think that in some cases an individual column 
(or more rarely, an individual neuron) can play a key role in one of 
these symbol-embodying strange-transients. 

So, for example, suppose Columns C1, C2, C3 are closely associated with 
symbol-embodying strange transients T1, T2 and T3.


Suppose there are highly conductive synaptic bundles going in the 
directions


C1 --> C2
C2 --> C3

Then, Hebbian learning may result in the potentiation of the synaptic 
bundle going


C1 --> C3

Now, we may analyze the relationships between the strange transients T1, 
T2, T3 using Markov chains, where a high-weight link between T1 and 
T2, for example, means that P(T2|T1) is large.


Then, the above Hebbian learning example will lead to the heuristic 
inference


P(T2 | T1) is large
P(T3 | T2) is large
|-
P(T3 | T1) is large

But this is probabilistic term logic deduction (and comes with specific 
quantitative formulas that I am not giving here).
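For concreteness, one standard independence-based form of the deduction 
strength formula looks like this (illustrative only -- the priors and inputs 
in the example call are arbitrary):

def deduction(p_b_given_a, p_c_given_b, p_b, p_c):
    """Estimate P(C|A) from P(B|A), P(C|B) and the node probabilities P(B), P(C)."""
    if p_b >= 1.0:
        return p_c_given_b
    # First term: the path A -> B -> C; second term: A -> not-B -> C, assuming
    # C is independent of A outside of B.
    return (p_b_given_a * p_c_given_b
            + (1.0 - p_b_given_a) * (p_c - p_b * p_c_given_b) / (1.0 - p_b))

# "P(T2|T1) is large" and "P(T3|T2) is large" yield a large P(T3|T1):
print(round(deduction(0.9, 0.9, 0.2, 0.25), 3))   # -> 0.819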


One can make similar analyses for other probabilistic logic rules. 

Basically, one can ground probabilistic inference on Markov 
probabilities between strange-transients of the neural network, in 
Hebbian learning on synaptic bundles between cortical columns.


And that is (in very sketchy form, obviously) part of my hypothesis 
about how the brain may ground symbolic logic in neurodynamics.


The subtler part of my hypothesis attempts to explain how higher-order 
functions and quantified logical relationships may be grounded in 
neurodynamics.  But I don't really want to post that on a list before 
publishing it formally in a scientific journal, as it's a bigger and 
also more complex idea.


This is not how Novamente works -- Novamente is not a neural net 
architecture.  However, Novamente does include some similar ideas.  In 
Novamente lingo, the strange transients mentioned above are called 
maps, and the role of the Hebbian learning mentioned above is played 
in NM by explicit probabilistic term logic.


So, according to my view,

In the brain: lower-level Hebbian learning on bundles of links btw 
neuronal clusters, leads to implicit probabilistic inference on 
strange-transients representing concepts


In Novamente: explicit heuristic/probabilistic inference on links btw 
nodes in NM's hypergraph datastructure, lead to implicit probabilistic 
inference on strange-transients (called maps) representing concepts


So, the Novamente approach seeks to retain the 
creativity/fluidity-supportive emergence of the brain's approach, while 
still utilizing a form of probabilistic logic rather than neuron 
emulations on the lower level.  This subtlety causes many people to 
misunderstand the Novamente architecture, because they only think about 
the lower level rather than the emergent, map level.   In terms of our 
practical Novamente work we have not done much with the map level yet, 
but we know this is going to be the crux of the system's AGI capability.


-- Ben



As I see it, ALL of these choices have their problems.  In other 
words, if the machinery of logical reasoning is actually visible to 
you in the naked hardware at any of these levels, I reckon that you 
must then commit to some 

[agi] The Missing Piece

2007-02-19 Thread John Scanlon
Is there anyone out there who has a sense that most of the work being done in 
AI is still following the same track that has failed for fifty years now?  The 
focus on logic as thought, or neural nets as the bottom-up, brain-imitating 
solution just isn't getting anywhere?  It's the same thing, and it's never 
getting anywhere.

The missing component is thought.  What is thought, and how do human beings 
think?  There is no reason that thought cannot be implemented in a sufficiently 
powerful computing machine -- the problem is how to implement it.

Logical deduction or inference is not thought.  It is mechanical symbol 
manipulation that can be programmed into any scientific pocket calculator.

Human intelligence is based on animal intelligence.  We can perform logical 
calculations because we can see the symbols and their relations and move the 
symbols around in our minds to produce the results, but the intelligence is not 
the symbol manipulation, but our ability to see the relationships spatially and 
decide if the pieces fit correctly through the process.

The world is continuous, spatiotemporal, and non-discrete, and simply is not 
describable in logical terms.  A true AI system has to model the world in the 
same way -- spatiotemporal sensorimotor maps.  Animal intelligence.

This is short, and doesn't express my ideas in much detail.  But I've been 
working alone for a long time now, and I think I have to find some people to 
talk to.  I have an AGI project I've been developing, but I can't do it all by 
myself.  If anyone has questions about what alternative ideas I have to the 
logical paradigm, I can clarify much further, as far as I can.  I would just 
like to maybe make some connections and find some people who aren't stuck in 
the computational, symbolic mode.

Ask some questions, and I'll tell you what I think.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread Bo Morgan

On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being 
) done in AI is still following the same track that has failed for fifty 
) years now?  The focus on logic as thought, or neural nets as the 
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's 
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch 
balls.  Visual perception and motor control for solving this task was 
first shown in a limited context in the 1960s.  You are correct that the 
bottom up approach is not a theory driven approach.  People talk about 
mystical words, such as Emergence or Complexity, in order to explain how 
their very simple model of mind can ultimately think like a human.  
Top-down design of an A.I. requires a theory of what abstract thought 
processes do.

) The missing component is thought.  What is thought, and how do human 
) beings think?  There is no reason that thought cannot be implemented in 
) a sufficiently powerful computing machine -- the problem is how to 
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't 
worry too much about trying to define Thought.  It has different 
definitions depending on the different problem-solving contexts in which it is 
used.  If you focus on making a machine solve problems, then you might see 
that some part of the machine you build will resemble your many uses for the 
term Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol 
) manipulation that can be programmed into any scientific pocket 
) calculator.

Logical deduction is only one way to think.  As you say, there are many 
other ways to think.  Some of these are simple reactive processes, while 
others are more deliberative and form multistep plans, while still others 
are reflective and react to problems in actual planning and inference 
processes.

) Human intelligence is based on animal intelligence.

No.  Human intelligence has evolved from animal intelligence.  Human 
intelligence is not necessarily a simple subsumption of animal 
intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is 
) not describable in logical terms.  A true AI system has to model the 
) world in the same way -- spatiotemporal sensorimotor maps.  Animal 
) intelligence.

Logical parts of the world are describable in logical terms.  We think in 
many different ways.  Each of these ways uses different representations of 
the world.  We have many specific solutions to specific types of problem 
solving, but to make a general problem solver we need ways to map these 
representations from one specific problem solver to another.  This allows 
alternatives to be pursued when a specific problem solver gets stuck.  This 
type of robust problem solving requires reasoning by analogy.

) Ask some questions, and I'll tell you what I think.

People always have a lot to say, but what we need more of are working 
algorithms and demonstrations of robust problem solving.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore

John Scanlon wrote:
Is there anyone out there who has a sense that most of the work being 
done in AI is still following the same track that has failed for fifty 
years now?  The focus on logic as thought, or neural nets as the 
bottom-up, brain-imitating solution just isn't getting anywhere?  It's 
the same thing, and it's never getting anywhere.
 
The missing component is thought.  What is thought, and how do human 
beings think?  There is no reason that thought cannot be implemented in 
a sufficiently powerful computing machine -- the problem is how to 
implement it.
 
Logical deduction or inference is not thought.  It is mechanical symbol 
manipulation that can be programmed into any scientific pocket 
calculator.
 
Human intelligence is based on animal intelligence.  We can perform 
logical calculations because we can see the symbols and their relations 
and move the symbols around in our minds to produce the results, but the 
intelligence is not the symbol manipulation, but our ability to see the 
relationships spatially and decide if the pieces fit correctly through 
the process.
 
The world is continuous, spatiotemporal, and non-discrete, and simply is 
not describable in logical terms.  A true AI system has to model the 
world in the same way -- spatiotemporal sensorimotor maps.  Animal 
intelligence.
 
This is short, and doesn't express my ideas in much detail.  But I've 
been working alone for a long time now, and I think I have to find some 
people to talk to.  I have an AGI project I've been developing, but I 
can't do it all by myself.  If anyone has questions about what 
alternative ideas I have to the logical paradigm, I can clarify much 
further, as far as I can.  I would just like to maybe make some 
connections and find some people who aren't stuck in the computational, 
symbolic mode.
 
Ask some questions, and I'll tell you what I think.


John,

I have *some* sympathy for what you say, but I am not sure I can buy the 
commitment to spatiotemporal maps and animal intelligence, because 
there are many ways to build a mind that do not use symbolic logic, 
without on the other hand insisting that everything is continuous.  You 
can have discrete symbols, but with internal structure, for example.


This is kind of a big, wide-open topic, so it might be better for you to 
write out an essay about what you have in mind when you imagine an 
alternative approach.



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread Cenny Wenner

On 2/19/07, Bo Morgan [EMAIL PROTECTED] wrote:



On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being
) done in AI is still following the same track that has failed for fifty
) years now?  The focus on logic as thought, or neural nets as the
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch
balls.  Visual perception and motor control for solving this task was
first shown in a limited context in the 1960s.  You are correct that the
bottom up approach is not a theory driven approach.  People talk about
mystical words, such as Emergence or Complexity, in order to explain how
their very simple model of mind can ultimately think like a human.
Top-down design of an A.I. requires a theory of what abstract thought
processes do.

) The missing component is thought.  What is thought, and how do human
) beings think?  There is no reason that thought cannot be implemented in
) a sufficiently powerful computing machine -- the problem is how to
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't
worry too much about trying to define Thought.  It has different
definitions depending on the different problem-solving contexts in which it is
used.  If you focus on making a machine solve problems, then you might see
that some part of the machine you build will resemble your many uses for the
term Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol
) manipulation that can be programmed into any scientific pocket
) calculator.

Logical deduction is only one way to think.  As you say, there are many
other ways to think.  Some of these are simple reactive processes, while
others are more deliberative and form multistep plans, while still others
are reflective and react to problems in actual planning and inference
processes.

) Human intelligence is based on animal intelligence.

No.  Human intelligence has evolved from animal intelligence.  Human
intelligence is not necessarily a simple subsumption of animal
intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is
) not describable in logical terms.  A true AI system has to model the
) world in the same way -- spatiotemporal sensorimotor maps.  Animal
) intelligence.

Logical parts of the world are describable in logical terms.  We think in
many different ways.  Each of these ways uses different representations of
the world.  We have many specific solutions to specific types of problem
solving, but to make a general problem solver we need ways to map these
representations from one specific problem solver to another.  This allows
alternatives to be pursued when a specific problem solver gets stuck.  This
type of robust problem solving requires reasoning by analogy.



I hope my ignorance does not bother this list too much.

Regarding what may or may not be done through logical inference and other
sufficiently expressive symbolic approaches: given unlimited resources, would
it not be possible to implement a UTM with at most a finite overhead?  That in
turn would mean that any algorithm running on a UTM could also run on a
sufficiently expressive symbolic system, whether it learns or not.  I do not
deny that this may be inefficient, both in running speed and in implementation
speed.  It may even be that the logical inference in such a case can be reduced
away entirely and proven, obviously, to be more efficient than implementing the
system directly on certain systems.  I do not think, however, that such a
strict and not well-formulated position is rationally justified, since it is
not clear (at least not to me) that the logical inference can be efficiently
reduced for every algorithm expressed in the logical language.

Just rambling, and perhaps unrelated, but maybe the brain's operations do not
even allow for UTMs, since they are not so clear-cut and there might not be
appropriate transformations; if we assume the Church-Turing thesis, we might
then find that there are problems that artificial components can solve but
humans cannot, even given unlimited resources.  Perhaps that is not very
likely, since we can simulate the process of a UTM by hand, and even the
errors can be corrected given enough time.
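
To make the finite-overhead point concrete, a toy sketch of my own (nothing
more than an illustration): a Turing machine whose transition table is just a
set of symbolic rules, stepped by a trivial interpreter.

# Toy illustration (my own encoding): a Turing machine whose transition table
# is a set of symbolic rules, executed with a small constant overhead per step.
# Rules map (state, symbol) -> (new_state, write, move).
# This particular machine increments a binary number written LSB-first.
rules = {
    ("inc", "0"): ("done", "1", 0),
    ("inc", "1"): ("inc", "0", +1),
    ("inc", "_"): ("done", "1", 0),
}

def run(tape_symbols, state="inc", head=0):
    tape = dict(enumerate(tape_symbols))         # sparse tape, "_" means blank
    while state != "done":
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))

print(run(list("110")))   # "110" is 3 in LSB-first binary; prints "001", i.e. 4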

) Ask some questions, and I'll tell you what I think.


People always have a lot to say, but what we need more of are working
algorithms and demonstrations of robust problem solving.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread Ben Goertzel


 It's pretty clear that humans don't run FOPC as a native code, but that
 we can learn it as a trick.

I disagree.  I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic term logic.

 Lower-level common-sense inferencing of the Clyde--elephant--gray type
 falls out of the representations and the associative operations.

I think it falls out of the logic of spike timing dependent long term
potentiation of bundles of synapses between cortical columns...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread J. Storrs Hall, PhD.
On Monday 19 February 2007 16:08, Ben Goertzel wrote:
   It's pretty clear that humans don't
  run FOPC as a native code, but that we can learn it as a trick.

 I disagree.  I think that Hebbian learning between cortical columns is
 essentially equivalent to basic probabilistic
 term logic.

That's a tantalizing hint (not that I haven't been floating a few of my 
own :-). I tend to think of my n-D spaces as representing what a column 
does... CSG is exactly propositional logic if you think of each point as a 
proposition. It's the mappings between spaces that are the tricky part and 
give you the equivalent power of predicates, but not in just that form.
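
To be concrete about "each point as a proposition" (a throwaway sketch of my
own, not anyone's actual representation): if a solid is just its membership
predicate, the CSG operators reduce to propositional connectives applied
pointwise.

# Sketch: constructive solid geometry over membership predicates.  A "solid"
# is a function point -> bool; union/intersection/difference are OR/AND/AND-NOT
# applied pointwise, i.e. propositional logic per point.
def sphere(cx, cy, cz, r):
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def box(lo, hi):
    return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))

def union(a, b):     return lambda p: a(p) or b(p)       # disjunction
def intersect(a, b): return lambda p: a(p) and b(p)      # conjunction
def subtract(a, b):  return lambda p: a(p) and not b(p)  # a AND NOT b

shape = subtract(intersect(box((-1, -1, -1), (1, 1, 1)), sphere(0, 0, 0, 1.2)),
                 sphere(0, 0, 0, 0.5))
print(shape((0.9, 0.0, 0.0)), shape((0.0, 0.0, 0.0)))    # True False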

I haven't looked at it, but I'd bet that Hebbian learning is within hollering 
distance of some of my associative clustering operations, on a conceptual 
level.

 I wouldn't try to get NM to represent general knowledge in this way,
 but, for representing knowledge about the physical environment and
 things observed and projected therein, having such operations to act
 on 3D manifolds would be quite valuable

True, but I'm envisioning going up to 1-D in some cases. The key problem, 
vis-a-vis a system that uses symbols as a base representation, is where do 
the symbols come from? My idea is to generalize operations that do 
recognition (e.g. of shapes, phonemes) from raw sense data (lots of nerve 
signals) -- and then to use the same operations all the way up, to form 
higher-level concepts from patterns of lower-level ones. 

Once you have symbols, i.e. once you've carved the world into concepts, things 
get a lot more straightforward.

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Mystical Emergence/Complexity [WAS Re: [agi] The Missing Piece]

2007-02-19 Thread Richard Loosemore

Bo Morgan wrote:

On Mon, 19 Feb 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
)
)  On Mon, 19 Feb 2007, John Scanlon wrote:
)  
)  ) Is there anyone out there who has a sense that most of the work being
)  ) done in AI is still following the same track that has failed for 
)  ) fifty years now?  The focus on logic as thought, or neural nets as the
)  ) bottom-up, brain-imitating solution just isn't getting anywhere?  
)  ) It's the same thing, and it's never getting anywhere.
)  
)  Yes, they are mostly building robots and trying to pick up blocks or catch

)  balls.  Visual perception and motor control for solving this task was first
)  shown in a limited context in the 1960s.  You are correct that the bottom up
)  approach is not a theory driven approach.  People talk about mystical words,
)  such as Emergence or Complexity, in order to explain how their very simple
)  model of mind can ultimately think like a human.  Top-down design of an A.I.
)  requires a theory of what abstract thought processes do.
) 
) It is interesting that you would say this.
) 
) My first reaction was to simply declare that I completely disagree with your

) "...mystical words, such as Emergence or Complexity..." comments, but that
) would not have been very constructive.
) 
) I am more interested in *why* you would say that.  What approaches do you have

) in mind, that are lacking in theory?  Who, of all the researchers you had in
) mind, are the ones you most consider to be using those words in a mystical
) way?

I think that describing the ways that humans solve problems will help us 
to understand how they are intelligent.  If we have a sufficient 
description of how humans solve problems then we will have a theory of 
how humans solve problems.  For example, answers to these questions:


How do children attach to their parents and not strangers?
How do children learn morals and values?
How do children learn how to stack blocks?
How do children do visual analogy completion problems?
How do parents feel anxious when they hear their child crying?
Why do our mental processes seem so simple when they are very intricate 
  processes of control, such as making a turn while walking?

How do we learn new ways to learn how to think?
How do we reflect on our planning mistakes in order to make a better plan 
  next time?


We need to describe these processes and view the architecture of human 
thinking from an implementation point of view.  I think that too many 
people are focusing on simple components that learn to do very simple 
tasks, such as recognizing handwriting characters or answering questions 
such as "Is there an animal in this picture?"


I disagree with an approach that has solved a simple problem and then 
claims that by massive scaling, massive parallelism, a humanly intelligent 
thinking process will Emerge.


) More pointedly, would you be able to give a statement of what *they* would
) claim was their most definitive, non-mystical statement of the meaning of
) terms like "complexity" or "emergence", and could you explain why you feel
) that, nevertheless, they are saying nothing beyond the vague and mystical?

One example of Emergence would be a recurrent neural network that has a 
given number of stable oscillating states.  People use these stable 
oscillating states instead of using symbols.  They invent recurrent neural 
networks that can transition from one symbol to the next.  This is fine 
work, but we already have symbols and the ability to actually describe 
human thought in symbolic systems.  RNNs have their time and place, but 
focusing solely on them is a bottom-up approach without a larger theory of 
mind.  Without a larger theory of how humans think, these networks will not 
magically become humanly intelligent.
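
To be concrete about stable states standing in for symbols, here is a toy
Hopfield-style sketch of my own (not anyone's actual system): each stored
pattern is an attractor, and settling into one of them plays the role of
recovering a discrete symbol from the dynamics.

# Toy Hopfield-style network: stored patterns are attractors; relaxing a noisy
# input into the nearest attractor recovers a discrete "symbol".
import numpy as np

patterns = {                      # two "symbols", encoded as +/-1 vectors
    "A": np.array([+1, -1, +1, -1, +1, -1]),
    "B": np.array([+1, +1, -1, -1, +1, +1]),
}
P = np.stack(list(patterns.values())).astype(float)
W = P.T @ P                       # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def settle(state, steps=20):
    s = state.astype(float).copy()
    for _ in range(steps):
        h = W @ s
        s = np.where(h > 0, 1.0, np.where(h < 0, -1.0, s))   # keep value on ties
    return s.astype(int)

noisy = patterns["A"].copy()
noisy[0] = -1                     # corrupt one bit of "A"
final = settle(noisy)
print([name for name, p in patterns.items() if np.array_equal(final, p)])  # ['A']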


) I ask this in a genuine spirit of inquiry:  I am puzzled as to why people say
) this stuff about complexity, because it has a very, very clear, non-mystical
) meaning.  But maybe you are using those words to refer to something different
) from what I mean, so I am trying to find out.

I'm not saying that complexity is ill-defined.  I'm saying that people 
make a leap such as: "Humans are complex systems", which as far as I 
understand is roughly equivalent to the statement "Humans have a lot of 
degrees of freedom."  They use this statement to draw an analogy between a 
human mind and a neural network with a billion nodes with no description 
of any organizing structure.  What are a few hundred computational 
elements that a neural network would need to implement?  These are the 
answers to the questions above.


That was a surprise:  the things that you were referring to when you
used the words "emergence" and "complexity" are in fact very different
from the meanings that a lot of others use, especially when they are
making the "mystical processes" criticism.  Your beef is not the same as
theirs, by a long way.

I work on a complex systems approach to cognition, but from my point of
view I am 

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore

Ben Goertzel wrote:


  It's pretty clear that humans don't run FOPC as a native code, but
  that we can learn it as a trick.

 I disagree.  I think that Hebbian learning between cortical columns is
 essentially equivalent to basic probabilistic term logic.

  Lower-level common-sense inferencing of the Clyde--elephant--gray
  type falls out of the representations and the associative operations.

 I think it falls out of the logic of spike timing dependent long term
 potentiation of bundles of synapses between cortical columns...


The original suggestion was (IIANM) that humans don't run FOPC as a 
native code *at the level of symbols and concepts* (i.e. the 
concept-stuff that we humans can talk about because we have 
introspective access at that level of our systems).


Now, if you are going to claim that spike-timing-dependent LTP between 
columns is where some probabilistic term logic is happening ON SYMBOLS, 
then what you have to do is buy into a story about where symbols are 
represented and how.  I am not clear about whether you are suggesting 
that the symbols are represented at:


(1) the column level, or
(2) the neuron level, or
(3) the dendritic branch level, or
(4) the synapse level, or (perhaps)
(5) the spike-train level (i.e. spike trains encode symbol patterns).

If you think that the logical machinery is visible, can you say which of 
these levels is the one where you see it?


As I see it, ALL of these choices have their problems.  In other words, 
if the machinery of logical reasoning is actually visible to you in the 
naked hardware at any of these levels, I reckon that you must then 
commit to some description of how symbols are implemented, and I think 
all of them look like bad news.


THAT is why, each time the subject is mentioned, I pull a 
sucking-on-lemons face and start bad-mouthing the neuroscientists.  ;-)


I don't mind there being some logic-equivalent machinery down there, but 
I think it would be strictly sub-cognitive, and not relevant to normal 
human reasoning at all ... and what I find frustrating is that (some 
of) the people who talk about it seem to think that they only have to 
find *something* in the neural hardware that can be mapped onto 
*something* like symbol-manipulation/logical reasoning, and they think 
they are half way home and dry, without stopping to consider the other 
implications of the symbols being encoded at that hardware-dependent 
level.  I haven't seen any neuroscientists who talk that way show any 
indication that they have a clue that there are even problems with it, 
let alone that they have good answers to those problems.


In other words, I don't think I buy it.


Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Missing Piece

2007-02-19 Thread Anna Taylor

Sorry, I was slow to read.
Working on a thought is what makes it, maybe one day, a reality.

Nice post. Thanks.
Anna:)

On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:

Eliezer S. Yudkowsky wrote:

John Scanlon wrote:
 Is there anyone out there who has a sense that most of the work being
 done in AI is still following the same track that has failed for fifty
 years now?  The focus on logic as thought, or neural nets as the
 bottom-up, brain-imitating solution just isn't getting anywhere?  It's
 the same thing, and it's never getting anywhere.

 The missing component is thought.  What is thought, and how do human
 beings think?  There is no reason that thought cannot be implemented in
 a sufficiently powerful computing machine -- the problem is how to
 implement it.

No, that's not it.  I know because I once built a machine with thoughts
in it and it still didn't work.  Do you have any other ideas?


Okay, that was a nice, quick dismissive statement.  And you're right -- just
insert the element of thought, and voila you have intelligence, or in the
case of the machine you once built -- nothing.  That's not what I mean.

I've read some of your stuff, and you know a lot more about computer science
and science in general than I may ever know.

I don't mean that the missing ingredient is simply the mystical idea of
thought.  I mean that thought is something different than calculation.
Human intelligence is built on animal intelligence -- and what I mean by
that is that there was animal intelligence, the same kind of intelligence
that can be seen today in apes, before the development of language, and that
intelligence was the substrate that allowed the use of language.

Language is the manipulation of symbols.  When you think of how a
non-linguistic proto-human species first started using language, you can
imagine creatures associating sounds with images -- "oog" is the big hairy
red ape who's always trying to steal your women.  "akk" is the action of
hitting him with a club.

The symbol, the sound, is associated with a sensorimotor pattern.  The
visual pattern is the big hairy red ape you know, and the motor pattern is
the sequence of muscle activations that swing the club.

In order to use these symbols effectively, you have to have a sensorimotor
image or pattern that the symbols are attached to.  That's what I'm getting
at.  That is thought.
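
A bare-bones sketch of what I mean by attaching a symbol to a sensorimotor
pattern (toy code, made-up feature values, nothing more than an illustration):

# Symbols grounded as prototype sensorimotor vectors; recognition is
# nearest-prototype matching on incoming sense data.
import numpy as np

prototypes = {
    "oog": np.array([0.9, 0.1, 0.8]),   # e.g. visual features of the big hairy red ape
    "akk": np.array([0.1, 0.9, 0.2]),   # e.g. motor features of the club swing
}

def ground(sense_vector):
    return min(prototypes, key=lambda s: np.linalg.norm(prototypes[s] - sense_vector))

print(ground(np.array([0.8, 0.2, 0.7])))   # -> oog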

We already know how to get computers to carry out very complex logical
calculations, but it's mechanical, it's not thought, and they can't navigate
themselves (with any serious competence) around a playground.

Language and logical intelligence are built on visual-spatial modeling.
That's why children learn their ABC's by looking at letters drawn on a
chalkboard and practicing the muscle movements to draw them on paper.

I think that the key to AI is to implement this sensorimotor, spatiotemporal
modeling in software.  That means data structures that represent the world
in three spatial dimensions and one temporal dimension.  This modeling can
be done.  It's done every day in video games.  But obviously that's not
enough.  There is the element of probability -- what usually occurs, what
might occur, and how my actions might affect what might occur.

Okay -- so what I am focused on is creating data structures that can take
sensorimotor patterns and put them into a knowledge-representation system
that can remember events, predict events, and predict how motor actions will
affect events.  And it is all represented in terms of sensorimotor images or
maps.
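
In very rough form (a toy of my own, not the actual data structures of my
project), the kind of thing I mean: occupied 3D locations per time step, plus
a crude predictor based on how often one observed state follows another.

# Toy sketch: a spatiotemporal (3 space + 1 time) map plus a simple
# frequency-based event predictor.
from collections import defaultdict

class WorldModel:
    def __init__(self):
        self.frames = []                                  # time -> {(x, y, z): label}
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, frame):
        if self.frames:
            prev = frozenset(self.frames[-1].items())
            self.transitions[prev][frozenset(frame.items())] += 1
        self.frames.append(dict(frame))

    def predict(self, frame):
        options = self.transitions[frozenset(frame.items())]
        return dict(max(options, key=options.get)) if options else None

wm = WorldModel()
wm.observe({(0, 0, 0): "ball"})
wm.observe({(1, 0, 0): "ball"})
wm.observe({(0, 0, 0): "ball"})
wm.observe({(1, 0, 0): "ball"})
print(wm.predict({(0, 0, 0): "ball"}))    # {(1, 0, 0): 'ball'}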

I don't have it all figured out right now, but this is what I'm working on.

  - Original Message -
  From: Eliezer S. Yudkowsky
  To: agi@v2.listbox.com
  Sent: Monday, February 19, 2007 9:12 PM
  Subject: Re: [agi] The Missing Piece


  John Scanlon wrote:
   Is there anyone out there who has a sense that most of the work being
   done in AI is still following the same track that has failed for fifty
   years now?  The focus on logic as thought, or neural nets as the
   bottom-up, brain-imitating solution just isn't getting anywhere?  It's
   the same thing, and it's never getting anywhere.
  
   The missing component is thought.  What is thought, and how do human
   beings think?  There is no reason that thought cannot be implemented in
   a sufficiently powerful computing machine -- the problem is how to
   implement it.

  No, that's not it.  I know because I once built a machine with thoughts
  in it and it still didn't work.  Do you have any other ideas?

  --
  Eliezer S. Yudkowsky  http://singinst.org/
  Research Fellow, Singularity Institute for Artificial Intelligence

  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303