[agi] The Missing Piece

2007-02-19 Thread John Scanlon
Is there anyone out there who has a sense that most of the work being done in 
AI is still following the same track that has failed for fifty years now?  The 
focus on logic as thought, or neural nets as the bottom-up, brain-imitating 
solution just isn't getting anywhere?  It's the same thing, and it's never 
getting anywhere.

The missing component is thought.  What is thought, and how do human beings 
think?  There is no reason that thought cannot be implemented in a sufficiently 
powerful computing machine -- the problem is how to implement it.

Logical deduction or inference is not thought.  It is mechanical symbol 
manipulation that can be programmed into any scientific pocket calculator.

Human intelligence is based on animal intelligence.  We can perform logical 
calculations because we can see the symbols and their relations and move the 
symbols around in our minds to produce the results, but the intelligence is not 
the symbol manipulation, but our ability to see the relationships spatially and 
decide if the pieces fit correctly through the process.

The world is continuous, spatiotemporal, and non-discrete, and simply is not 
describable in logical terms.  A true AI system has to model the world in the 
same way -- spatiotemporal sensorimotor maps.  Animal intelligence.

This is short, and doesn't express my ideas in much detail.  But I've been 
working alone for a long time now, and I think I have to find some people to 
talk to.  I have an AGI project I've been developing, but I can't do it all by 
myself.  If anyone has questions about the alternative ideas I have to the 
logical paradigm, I can clarify much further, as far as I am able.  I would just 
like to maybe make some connections and find some people who aren't stuck in 
the computational, symbolic mode.

Ask some questions, and I'll tell you what I think.



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-19 Thread Ricardo Barreira

On 2/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:

> You might check out D ( http://www.digitalmars.com/d/index.html ).  Mind
> you, it's still in the quite early days, and missing a lot of libraries
> ... which means you need to construct interfaces to the C versions.
> Still, it answers several of your objections, and has partial answers to
> at least one of the others.


I was going to try out D some time ago, but decided not to when I
learned that they use Hans Boehm's conservative garbage collector. I
find conservative garbage collection to be very inelegant and too
error-prone for my taste, even if it works well in practice for most
projects...



[agi] Larry Page, Google: We have some people at Google (who) are really trying to build artificial intelligence...

2007-02-19 Thread Kingma, D.P.

Larry Page, Google co-founder: "We have some people at Google (who) are
really trying to build artificial intelligence and to do it on a large
scale," Page said to a packed Hilton ballroom of scientists. "It's not as
far off as people think."

link:
http://news.com.com/2100-11395_3-6160372.html



Re: [agi] The Missing Piece

2007-02-19 Thread Bo Morgan

On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being 
) done in AI is still following the same track that has failed for fifty 
) years now?  The focus on logic as thought, or neural nets as the 
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's 
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch 
balls.  Visual perception and motor control for solving this task was 
first shown in a limited context in the 1960s.  You are correct that the 
bottom-up approach is not a theory-driven approach.  People talk about 
mystical words, such as Emergence or Complexity, in order to explain how 
their very simple model of mind can ultimately think like a human.  
Top-down design of an A.I. requires a theory of what abstract thought 
processes do.

) The missing component is thought.  What is thought, and how do human 
) beings think?  There is no reason that thought cannot be implemented in 
) a sufficiently powerful computing machine -- the problem is how to 
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't 
worry too much about trying to define Thought.  It has different 
definitions depending on the different problem solving contexts that it is 
used.  If you focus on making a machine solve problems, then you might see 
some part of the machine you build will resemble your many uses for the 
term Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol 
) manipulation that can be programmed into any scientific pocket 
) calculator.

Logical deduction is only one way to think.  As you say, there are many 
other ways to think.  Some of these are simple reactive processes, while 
others are more deliberative and form multistep plans, while still others 
are reflective and react to problems in actual planning and inference 
processes.

) Human intelligence is based on animal intelligence.

No.  Human intelligence has evolved from animal intelligence.  Human 
intelligence is not necessarily a simple subsumption of animal 
intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is 
) not describable in logical terms.  A true AI system has to model the 
) world in the same way -- spatiotemporal sensorimotor maps.  Animal 
) intelligence.

Logical parts of the world are describable in logical terms.  We think in 
many different ways.  Each of these ways uses different representations of 
the world.  We have many specific solutions to specific types of problem 
solving, but to make a general problem solver we need ways to map these 
representations from one specific problem solver to another.  This allows 
alternatives to be pursued when a specific problem solver gets stuck.  This 
type of robust problem solving requires reasoning by analogy.

) Ask some questions, and I'll tell you what I think.

People always have a lot to say, but what we need more of are working 
algorithms and demonstrations of robust problem solving.



[agi] Development Environments for AI (a few non-religious comments!)

2007-02-19 Thread Richard Loosemore


Wow, I leave off email for two days and a 55-message Religious War 
breaks out!  ;-)



I promise this is nothing to do with languages I do or do not like (i.e. 
it is non-religious...).



As many people pointed out, programming language matters a good deal 
less than what you are going to use it for.  In my case I am very clear 
about what I want to do, and it is very different from conventional AI.


My own goals are to build an entire software development environment, as 
I said earlier, and the main reasons for this are:


1) I am working on a conceptual framework for developing a *class* of AI 
systems [NB:  a class of systems, not just one system], and the best way 
to express a framework is by instantiating that framework in the form 
of a tool that allows systems within that framework to be constructed 
easily.


2) My intention is to do systematic experiments to investigate the 
behavior of systems within that class, so I need some way to easily do 
this systematic experimentation.  I want, for example, to construct a 
particular mechanism and then look at the behavior of many variants of 
that mechanism.  So, for example, a concept-learning mechanism that 
involves a parameter governing the number of daughter concepts that are 
grabbed in an abstraction event ... and I might be interested in how the 
mechanism behaves when the number of daughters is 2, 3, 4, 5, or some 
random number in the vicinity of one of those.  I need a tool that will 
let me quickly set up such simulation experiments without having to 
touch any low level code.


3) One reason that is almost tangential to AI itself, though related:  I 
believe that conventional environments and languages are built by people 
who think like engineers, and do not have a good understanding of how a 
mind works when it is trying to comprehend the enormous complexity of 
computational systems.  [I know, I said that in combative language:  but 
try not to flame me just because I said it assertively ;-)].  So I am 
trying to use psychological principles to make the process of system 
design and programming into a task that does not constantly trap the 
designer/programmer into the most stupid of errors.  I have a number of 
ideas in this respect, but since I am talking to some people about 
funding this project right now, I'd rather not go into detail.


4) I need particular primitives that are simply not available in 
conventional languages.  The biggest example is a facility for massive 
asymmetric parallelism that is not going to fall flat on its face all 
the time (with deadlocks and livelocks).  I realise that everyone and 
their grandmother would like to do massive parallel programming without 
all the usual headaches, and that the general problem is horrendous... 
but I can actually solve the problem in my context because I do not have 
to create a general solution to the problem.  There is a restriction in 
my case that enables me to get away without having to solve the general 
problem.  Again, apologies for coyness:  possible patent pending and all 
that.





Richard Loosemore.




Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore

John Scanlon wrote:

> [...snip: John's opening post, quoted in full at the top of this thread...]


John,

I have *some* sympathy for what you say, but I am not sure I can buy the 
commitment to "spatiotemporal maps" and "animal intelligence", because 
there are many ways to build a mind that do not use symbolic logic, 
without on the other hand insisting that everything is continuous.  You 
can have discrete symbols, but with internal structure, for example.


This is kind of a big, wide-open topic, so it might be better for you to 
write out an essay about what you have in mind when you imagine an 
alternative approach.



Richard Loosemore.



Re: [agi] The Missing Piece

2007-02-19 Thread Cenny Wenner

On 2/19/07, Bo Morgan [EMAIL PROTECTED] wrote:

> [...snip: Bo's reply to John Scanlon, quoted in full above...]

I hope my ignorance does not bother this list too much.

Regarding what may or may not be done through logical inference and other
sufficiently expressive symbolic approaches: given unlimited resources,
would it not be possible to implement a UTM with at most a finite
overhead?  That in turn would mean that any algorithm which runs on a UTM
could also run on a sufficiently expressive symbolic system, whether it
learns or not.  I do not argue that this would be efficient, either to
run or to implement.  In some cases the logical-inference layer could
even be compiled away entirely, and provably more efficiently than
implementing the system directly on certain substrates.  I do not think,
however, that so strict and ill-formulated a position is rationally
justified, since it is not clear (at least not to me) that the inference
layer can be efficiently eliminated for every algorithm expressed in the
logical language.

A rambling and unrelated aside: perhaps the brain's operations do not
even allow for UTMs, since they are noisy and there may be no appropriate
transformations; if we assume the Church-Turing thesis, we might then
find that there are problems artificial components can solve that humans
cannot, even given unlimited resources.  Perhaps not very likely, since
we can simulate the process of a UTM by hand, and even the errors may be
corrected given enough time.
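
To illustrate the finite-overhead direction of the argument (a toy in
Python, not a proof): the whole "machine" below is a set of symbolic
transition facts, and running it is nothing but mechanical symbol
manipulation, yet it computes -- here, the successor function in unary.

# Transition facts: (state, read_symbol) -> (write_symbol, move, next_state)
rules = {
    ("scan", "1"): ("1", +1, "scan"),   # walk right over the unary digits
    ("scan", "_"): ("1", 0, "halt"),    # append one more '1', then halt
}

def run(tape_str, state="scan", max_steps=1000):
    tape = dict(enumerate(tape_str))    # sparse tape; '_' means blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))   # '1111' -- computed by pure symbol shuffling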



Re: [agi] Re: Languages for AGI

2007-02-19 Thread J. Storrs Hall, PhD.
On Sunday 18 February 2007 19:22, Ricardo Barreira wrote:

> You can spend all the time you want sharpening your axes, it'll do you
> no good if you don't know what you'll use it for...

True enough.  However, as I've also mentioned in this venue before, I want to 
be able to do general associative retrieval, interpolation, and extrapolation 
of time-varying trajectories of manifolds in n-dimensional spaces, and 
constructive solid geometry between them.  I'm guessing that's about halfway 
to AI -- i.e. with that as a primitive, the amount of coding needed for AI is 
about as much as is required to build that primitive from current programming 
tools.

Josh



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-19 Thread Lukasz Kaiser

Hi,

I was offline and missed the large discussion so let me just add my 2c:


> Cobra is currently at a late alpha stage. There are some docs
> (including a comparison to Python) and examples. (And pardon my plain
> looking web site, but I have no graphics skills.) Here it is:
> http://cobralang.com/


Nice :). You might want to check another open-source .Net language
called Nemerle (nemerle.org). It is quite stable now, reasonably efficient
and has bindings to some IDEs (VS, monodevelop). It is primarily
a functional language and not that Python-like, but it has a special
option that allows you to switch to Python-like syntax (white-space
and newline delimiters, etc.). And it has very nice lisp-like macros :).


> Far and away, the best answer to the best language question is the .NET
> framework.  If you're using the framework, you can use any language that
> has been implemented on the framework (which includes everything from C#
> to the OCaml-like F# and nearly every language in between -- though
> obviously some implementations are better than others) AND you can easily
> intermix languages (so the answer to best language will vary from piece
> to piece).


Unfortunately, after being involved in .Net for quite some time, I do not
share your optimism. In fact I came to think that .Net is not suitable
for anything that requires really high performance and parallelism.
Perhaps the problem is just that it is very, very hard to build a really
good VM, and probably impossible to build one that will be good for
more than one programming paradigm. As long as you do imperative
OO programming .Net might be OK, and your comments about mixing
languages are right. But if you start doing functional and generative
programming it will be a pain and a performance bottleneck. In that case
you need things like MetaOCaml (www.metaocaml.org) for generative
programming or OCamlP3l for easy parallelism (ocamlp3l.inria.fr/eng.htm).

- lk



Re: [agi] Re: Languages for AGI

2007-02-19 Thread Ben Goertzel

J. Storrs Hall, PhD. wrote:

> On Sunday 18 February 2007 19:22, Ricardo Barreira wrote:
>> You can spend all the time you want sharpening your axes, it'll do you
>> no good if you don't know what you'll use it for...
>
> True enough.  However, as I've also mentioned in this venue before, I
> want to be able to do general associative retrieval, interpolation, and
> extrapolation of time-varying trajectories of manifolds in n-dimensional
> spaces, and constructive solid geometry between them.

BTW, if you do make efficient tools for this, we could certainly use
them within Novamente -- though perhaps for a more limited purpose than
what you envision.

I wouldn't try to get NM to represent general knowledge in this way,
but, for representing knowledge about the physical environment and
things observed and projected therein, having such operations to act on
3D manifolds would be quite valuable.

ben




Re: [agi] The Missing Piece

2007-02-19 Thread Ben Goertzel


> It's pretty clear that humans don't run FOPC as a native code, but that
> we can learn it as a trick.

I disagree.  I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic term logic.

> Lower-level common-sense inferencing of the Clyde--elephant--gray type
> falls out of the representations and the associative operations.

I think it falls out of the logic of spike-timing-dependent long-term
potentiation of bundles of synapses between cortical columns...

-- Ben G
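
P.S.  To make the claimed correspondence concrete, here is a toy sketch
(my own construction for this post, nothing to do with real cortical data
structures or actual Novamente code): Hebbian co-activation counts
between "columns" estimate conditional probabilities, and chaining those
estimates already gives a crude, independence-assuming term-logic
deduction of the Clyde--elephant--gray type.

from collections import Counter

# Toy Hebbian layer: count activations and co-activations over episodes.
# P(b|a) is estimated as count(a and b) / count(a), which is the strength
# of the term-logic inheritance relation "a -> b".
episodes = [
    {"clyde", "elephant", "gray"},
    {"clyde", "elephant", "gray"},
    {"elephant", "gray"},
    {"elephant", "white"},        # a rare albino elephant
    {"mouse", "gray"},
]

single, pair = Counter(), Counter()
for active in episodes:           # Hebbian update: units that fire
    for a in active:              # together, wire together
        single[a] += 1
        for b in active:
            if a != b:
                pair[(a, b)] += 1

def strength(a, b):
    """Estimated P(b|a): the inheritance strength from a to b."""
    return pair[(a, b)] / single[a] if single[a] else 0.0

# Deduction, independence-assuming version (real probabilistic term
# logic adds correction terms):
deduced = strength("clyde", "elephant") * strength("elephant", "gray")
print(f"direct:  {strength('clyde', 'gray'):.2f}")   # 1.00
print(f"deduced: {deduced:.2f}")                     # 0.75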



Re: [agi] The Missing Piece

2007-02-19 Thread J. Storrs Hall, PhD.
On Monday 19 February 2007 16:08, Ben Goertzel wrote:
>> It's pretty clear that humans don't run FOPC as a native code, but
>> that we can learn it as a trick.
>
> I disagree.  I think that Hebbian learning between cortical columns is
> essentially equivalent to basic probabilistic term logic.

That's a tantalizing hint (not that I haven't been floating a few of my 
own :-). I tend to think of my n-D spaces as representing what a column 
does... CSG is exactly propositional logic if you think of each point as a 
proposition. It's the mappings between spaces that are the tricky part and 
give you the equivalent power of predicates, but not in just that form.
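
To spell out that correspondence with a throwaway Python sketch (assuming
the simplest reading, where a solid is just a membership predicate over
points): each point carries a truth value, and the CSG operators are the
propositional connectives.

from typing import Callable

Point = tuple  # (x, y, z)
Solid = Callable[[Point], bool]   # a solid = the proposition "p is inside"

def sphere(cx, cy, cz, r) -> Solid:
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def union(a: Solid, b: Solid) -> Solid:        # OR
    return lambda p: a(p) or b(p)

def intersect(a: Solid, b: Solid) -> Solid:    # AND
    return lambda p: a(p) and b(p)

def subtract(a: Solid, b: Solid) -> Solid:     # AND NOT
    return lambda p: a(p) and not b(p)

# A unit ball with a bite taken out of it:
shape = subtract(sphere(0, 0, 0, 1.0), sphere(0.8, 0, 0, 0.5))
print(shape((0.0, 0.0, 0.0)))   # True  -- inside the ball, outside the bite
print(shape((0.9, 0.0, 0.0)))   # False -- inside the bite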

I haven't looked at it, but I'd bet that Hebbian learning is within hollering 
distance of some of my associative clustering operations, on a conceptual 
level.

> I wouldn't try to get NM to represent general knowledge in this way,
> but, for representing knowledge about the physical environment and
> things observed and projected therein, having such operations to act
> on 3D manifolds would be quite valuable

True, but I'm envisioning going up to 1-D in some cases. The key problem, 
vis-a-vis a system that uses symbols as a base representation, is where do 
the symbols come from? My idea is to generalize operations that do 
recognition (e.g. of shapes, phonemes) from raw sense data (lots of nerve 
signals) -- and then to use the same operations all the way up, to form 
higher-level concepts from patterns of lower-level ones. 

Once you have symbols, i.e. once you've carved the world into concepts, things 
get a lot more straightforward.
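
A degenerate one-dimensional illustration of "the same operations all the
way up" (nothing like real sensory processing; it only shows the
recursion): name the most frequent pattern in the stream as a new symbol,
then feed the renamed stream back into the very same operation.

from collections import Counter

def recognize(stream, new_symbol):
    """One generic recognition step: find the most frequent adjacent
    pair and replace it everywhere with a new higher-level symbol."""
    pairs = Counter(zip(stream, stream[1:]))
    (a, b), _count = pairs.most_common(1)[0]
    out, i = [], 0
    while i < len(stream):
        if i + 1 < len(stream) and (stream[i], stream[i + 1]) == (a, b):
            out.append(new_symbol)       # the pair is now a single concept
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out, (a, b)

stream = list("abcabcabdabc")   # stand-in for raw sense data
for level in range(3):          # same operation, applied to its own output
    sym = f"<{level}>"
    stream, parts = recognize(stream, sym)
    print(level, stream, "  new symbol:", sym, "=", parts)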

Josh



Re: Mystical Emergence/Complexity [WAS Re: [agi] The Missing Piece]

2007-02-19 Thread Richard Loosemore

Bo Morgan wrote:

On Mon, 19 Feb 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
)
)  [...snip: Bo's earlier reply to John Scanlon, quoted in full above...]
) 
) It is interesting that you would say this.
) 
) My first reaction was to simply declare that I completely disagree with your

) "...mystical words, such as Emergence or Complexity..." comments, but that
) would not have been very constructive.
) 
) I am more interested in *why* you would say that.  What approaches do you have

) in mind, that are lacking in theory?  Who, of all the researchers you had in
) mind, are the ones you most consider to be using those words in a mystical
) way?

I think that describing the ways that humans solve problems will help us 
to understand how they are intelligent.  If we have a sufficient 
description of how humans solve problems then we will have a theory of 
how humans solve problems.  For example, answers to these questions:


How do children attach to their parents and not strangers?
How do children learn morals and values?
How do children learn how to stack blocks?
How do children do visual analogy completion problems?
How do parents feel anxious when they hear their child crying?
Why do our mental processes seem so simple when they are very intricate 
  processes of control, such as making a turn while walking?

How do we learn new ways to learn how to think?
How do we reflect on our planning mistakes in order to make a better plan 
  next time?


We need to describe these processes and view the architecture of human 
thinking from an implementation point of view.  I think that too many 
people are focusing on simple components that learn to do very simple 
tasks, such as recognizing handwriting characters or answering questions 
such as "Is there an animal in this picture?"


I disagree with an approach that has solved a simple problem and then 
claims that, by massive scaling and massive parallelism, a humanly 
intelligent thinking process will Emerge.


) More pointedly, would you be able to give a statement of what *they* would
) claim was their most definitive, non-mystical statement of the meaning of
) terms like "complexity" or "emergence", and could you explain why you feel
) that, nevertheless, they are saying nothing beyond the vague and mystical?

One example of Emergence would be a recurrent neural network that has a 
given number of stable oscillating states.  People use these stable 
oscillating states instead of using symbols.  They invent recurrent neural 
networks that can transition from one symbol to the next.  This is fine 
work, but we already have symbols and the ability to actually describe 
human thought in symbolic systems.  RNNs have their time and place, but 
focusing solely on them is a bottom-up approach without a larger theory of 
mind.  Without a larger theory of how humans think these networks will not 
become humanly intelligent magically.
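
(A minimal Hopfield-style sketch of the "attractor state = symbol" idea,
using fixed points rather than oscillations for brevity; everything here
is invented for illustration: the two stored patterns are the network's
stable states, and a noisy input relaxes to one of them.)

import random

patterns = {            # each attractor plays the role of one symbol
    "A": [1, 1, 1, 1, -1, -1, -1, -1],
    "B": [1, -1, 1, -1, 1, -1, 1, -1],
}
n = 8
# Hebbian weights: sum of outer products of the stored patterns
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns.values())
      for j in range(n)] for i in range(n)]

def settle(state, rounds=100):
    for _ in range(rounds):             # asynchronous sign updates
        i = random.randrange(n)
        state[i] = 1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
    return state

noisy = [1, 1, -1, 1, -1, -1, -1, -1]   # pattern "A" with one bit flipped
final = settle(noisy[:])
symbol = next((name for name, p in patterns.items()
               if p in (final, [-x for x in final])), "spurious")
print(symbol)                           # almost always: A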


) I ask this in a genuine spirit of inquiry:  I am puzzled as to why people say
) this stuff about complexity, because it has a very, very clear, non-mystical
) meaning.  But maybe you are using those words to refer to something different
) than what I mean ... so I am trying to find out.

I'm not saying that complexity is ill-defined.  I'm saying that people 
make a leap such as "Humans are complex systems", which as far as I 
understand is roughly equivalent to the statement "Humans have a lot of 
degrees of freedom."  They use this statement to draw an analogy between a 
human mind and a neural network with a billion nodes and no description 
of any organizing structure.  What are the few hundred computational 
elements that such a network would need to implement?  The answers to the 
questions above are exactly those elements.


That was a surprise:  the things that you were referring to when you
used the words "emergence" and "complexity" are in fact very different
from the meanings that a lot of others use, especially when they are
making the "mystical processes" criticism.  Your beef is not the same as
theirs, by a long way.

I work on a complex systems approach to cognition, but from my point of
view I am 

Re: [agi] The Missing Piece

2007-02-19 Thread Richard Loosemore

Ben Goertzel wrote:


> [...snip: Ben's message, quoted in full above...]


The original suggestion was (IIANM) that humans don't run FOPC as a 
native code *at the level of symbols and concepts* (i.e. the 
concept-stuff that we humans can talk about because we have 
introspective access at that level of our systems).


Now, if you are going to claim that spike-timing-dependent LTP between 
columns is where some probabilistic term logic is happening ON SYMBOLS, 
then what you have to do is buy into a story about where symbols are 
represented and how.  I am not clear about whether you are suggesting 
that the symbols are represented at:


(1) the column level, or
(2) the neuron level, or
(3) the dendritic branch level, or
(4) the synapse level, or (perhaps)
(5) the spike-train level (i.e. spike trains encode symbol patterns).

If you think that the logical machinery is visible, can you say which of 
these levels is the one where you see it?


As I see it, ALL of these choices have their problems.  In other words, 
if the machinery of logical reasoning is actually visible to you in the 
naked hardware at any of these levels, I reckon that you must then 
commit to some description of how symbols are implemented, and I think 
all of them look like bad news.


THAT is why, each time the subject is mentioned, I pull a 
sucking-on-lemons face and start bad-mouthing the neuroscientists.  ;-)


I don't mind there being some logic-equivalent machinery down there, but 
I think it would be strictly sub-cognitive, and not relevant to normal 
human reasoning at all ... and what I find frustrating is that (some 
of) the people who talk about it seem to think that they only have to 
find *something* in the neural hardware that can be mapped onto 
*something* like symbol-manipulation/logical reasoning, and they think 
they are half way home and dry, without stopping to consider the other 
implications of the symbols being encoded at that hardware-dependent 
level.  I haven't seen any neuroscientists who talk that way show any 
indication that they have a clue that there are even problems with it, 
let alone that they have good answers to those problems.


In other words, I don't think I buy it.


Richard Loosemore.




Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-19 Thread Anna Taylor

Richard Loosemore wrote:
> There is a restriction in my case that enables me to get away without
> having to solve the general problem.

I am curious to know what that restriction is.  A reply offlist would be
welcomed.
Thanks
Anna:)






Re: [agi] The Missing Piece

2007-02-19 Thread Anna Taylor

Sorry, I was slow to read.
Working on a thought is what may make it, one day, a reality.

Nice post. Thanks.
Anna:)

On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:

Eliezer S. Yudkowsky wrote:

> John Scanlon wrote:
>> Is there anyone out there who has a sense that most of the work being
>> done in AI is still following the same track that has failed for fifty
>> years now?  The focus on logic as thought, or neural nets as the
>> bottom-up, brain-imitating solution just isn't getting anywhere?  It's
>> the same thing, and it's never getting anywhere.
>>
>> The missing component is thought.  What is thought, and how do human
>> beings think?  There is no reason that thought cannot be implemented in
>> a sufficiently powerful computing machine -- the problem is how to
>> implement it.
>
> No, that's not it.  I know because I once built a machine with thoughts
> in it and it still didn't work.  Do you have any other ideas?


Okay, that was a nice, quick dismissive statement.  And you're right -- just
insert the element of thought, and voila you have intelligence, or in the
case of the machine you once built -- nothing.  That's not what I mean.

I've read some of your stuff, and you know a lot more about computer science
and science in general than I may ever know.

I don't mean that the missing ingredient is simply the mystical idea of
thought.  I mean that thought is something different from calculation.
Human intelligence is built on animal intelligence -- and what I mean by
that is that animal intelligence, the same kind of intelligence that can
be seen today in apes, existed before the development of language, and
was the substrate that made the use of language possible.

Language is the manipulation of symbols.  When you think of how a
non-linguistic proto-human species first started using language, you can
imagine creatures associating sounds with images -- "oog" is the big hairy
red ape who's always trying to steal your women.  "akk" is the action of
hitting him with a club.

The symbol, the sound, is associated with a sensorimotor pattern.  The
visual pattern is the big hairy red ape you know, and the motor pattern is
the sequence of muscle activations that swing the club.

In order to use these symbols effectively, you have to have a sensorimotor
image or pattern that the symbols are attached to.  That's what I'm getting
at.  That is thought.

We already know how to get computers to carry out very complex logical
calculations, but it's mechanical, it's not thought, and they can't navigate
themselves (with any serious competence) around a playground.

Language and logical intelligence are built on visual-spatial modeling.
That's why children learn their ABC's by looking at letters drawn on a
chalkboard and practicing the muscle movements to draw them on paper.

I think that the key to AI is to implement this sensorimotor, spatiotemporal
modeling in software.  That means data structures that represent the world
in three spatial dimensions and one temporal dimension.  This modeling can
be done.  It's done every day in video games.  But obviously that's not
enough.  There is the element of probability -- what usually occurs, what
might occur, and how my actions might affect what might occur.

Okay -- so what I am focused on is creating data structures that can take
sensorimotor patterns and put them into a knowledge-representation system
that can remember events, predict events, and predict how motor actions will
affect events.  And it is all represented in terms of sensorimotor images or
maps.
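
A first skeleton of what I mean, in Python (every name here is
provisional, just to fix ideas): events as points in space and time
tagged with sense and motor data, plus a transition memory that predicts
what tends to follow a given percept-action pair.

from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    t: float          # when
    pos: tuple        # where, in three spatial dimensions
    percept: str      # what was sensed
    motor: str        # what action was under way

class SensorimotorMap:
    def __init__(self):
        self.events = []
        # (percept, motor) -> counts of the percepts that followed
        self.transitions = defaultdict(Counter)

    def remember(self, event):
        if self.events:
            prev = self.events[-1]
            self.transitions[(prev.percept, prev.motor)][event.percept] += 1
        self.events.append(event)

    def predict(self, percept, motor):
        """What usually occurs after doing `motor` while sensing `percept`?"""
        following = self.transitions[(percept, motor)]
        return following.most_common(1)[0][0] if following else None

m = SensorimotorMap()
m.remember(Event(0.0, (0, 0, 0), "ape-near", "swing-club"))
m.remember(Event(1.0, (0, 0, 0), "ape-flees", "rest"))
print(m.predict("ape-near", "swing-club"))   # ape-flees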

I don't have it all figured out right now, but this is what I'm working on.





Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-19 Thread Chuck Esterbrook

On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Wow, I leave off email for two days and a 55-message Religious War
breaks out!  ;-)

I promise this is nothing to do with languages I do or do not like (i.e.
it is non-religious...).

As many people pointed out, programming language matters a good deal
less than what you are going to use it for.  In my case I am very clear
about what I want to do, and it is very different from conventional AI.

My own goals are to build an entire software development environment, as
I said earlier, and the main reasons for this are:

1) I am working on a conceptual framework for developing a *class* of AI
systems [NB:  a class of systems, not just one system], and the best way
to express a framework is by instantiating that framework in the form
of a tool that allows systems within that framework to be constructed
easily.


Can't comment on this one as it's too high-level for me.


2) My intention is to do systematic experiments to investigate the
behavior of systems within that class, so I need some way to easily do
this systematic experimentation.  I want, for example, to construct a
particular mechanism and then look at the behavior of many variants of
that mechanism.  So, for example, a concept-learning mechanism that
involves a parameter governing the number of daughter concepts that are
grabbed in an abstraction event ... and I might be interested in how the
mechanism behaves when the number of daughters is 2, 3, 4, 5, or some
random number in the vicinity of one of those.  I need a tool that will
let me quickly set up such simulation experiments without having to
touch any low level code.


I've done this for financial analysis and genetic algorithm projects
that had parameters that could be varied.

It can be glued on to just about any system. Define your parameters by
name, type, required-or-not and optionally (when applicable) min and
max. Then provide some code that reads the parameter definitions and
does (at least) the following:
* complains about violations (missing value, value out of range)
* interprets looping values like a (start, stop, step) for numeric
parameters or an (a, b, c) for enums or strings
* executes the program with each combination of values, storing the
parameter sets with the results

The inputs could be done via a text file that is parsed and
interpreted. And/or a web or GUI form could be generated from the
defs.
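
A skeleton of what I mean, in Python (names invented for this post,
validation kept minimal):

from itertools import product

# Parameter definitions: name -> (type, required, min, max)
defs = {
    "daughters": (int,   True,  1, 10),
    "noise":     (float, False, 0.0, 1.0),
}

def expand(spec):
    """Interpret a (start, stop, step) triple as a sweep; else a fixed value."""
    if isinstance(spec, tuple):
        start, stop, step = spec
        vals, v = [], start
        while v <= stop:
            vals.append(v)
            v += step
        return vals
    return [spec]

def validate(name, value):
    typ, _required, lo, hi = defs[name]
    if not isinstance(value, typ) or not (lo <= value <= hi):
        raise ValueError(f"{name}={value!r} violates its definition")

def sweep(params, run):
    names = list(params)
    grids = [expand(params[n]) for n in names]
    results = []
    for combo in product(*grids):        # every combination of values
        point = dict(zip(names, combo))
        for n, v in point.items():
            validate(n, v)
        results.append((point, run(**point)))   # keep params with result
    return results

def experiment(daughters, noise):        # stand-in for the real simulation
    return daughters * (1 - noise)

for point, result in sweep({"daughters": (2, 5, 1), "noise": 0.1}, experiment):
    print(point, "->", result)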

My real point is that you don't really need a new dev env for this.


3) One reason that is almost tangential to AI itself, though related:  I
believe that conventional environments and languages are built by people
who think like engineers, and do not have a good understanding of how a
mind works when it is trying to comprehend the enormous complexity of
computational systems.  [I know, I said that in combative language:  but
try not to flame me just because I said it assertively ;-)].  So I am
trying to use psychological principles to make the process of system
design and programming into a task that does not constantly trap the
designer/programmer into the most stupid of errors.  I have a number of
ideas in this respect, but since I am talking to some people about
funding this project right now, I'd rather not go into detail.


This is the most interesting point in your list. Too bad we can't get
the details yet.  :-)

I don't know what such an environment would look like, but I don't see
why it couldn't exist. Developers have to keep a lot of stuff in their
head as they work on a project, and I'm positive that current IDEs
aren't doing nearly as much as they could to help visualize, manage and
develop a project.

I sincerely wish you luck as I'd like to take such an environment for a drive.


4) I need particular primitives that are simply not available in
conventional languages.  The biggest example is a facility for massive
asymmetric parallelism that is not going to fall flat on its face all
the time (with deadlocks and livelocks).  I realise that everyone and
their grandmother would like to do massive parallel programming without
all the usual headaches, and that the general problem is horrendous...
but I can actually solve the problem in my context because I do not have
to create a general solution to the problem.  There is a restriction in
my case that enables me to get away without having to solve the general
problem.  Again, apologies for coyness:  possible patent pending and all
that.


That feels like something that can be done via a library, although I
appreciate that some things can only be done at the language level or
are simply best done there. (And perhaps at the environment level or
some mix of all of these.)

Feel free to keep us informed of any technical and business developments...

-Chuck
