Re: [agi] The Missing Piece

2007-02-20 Thread Andrii (lOkadin) Zvorygin

The key to life, the universe and everything:

All things can be expressed using any Universal Computer

You are a Universal Computer (one that can read (remember/imagine),
write (experience), erase (forget)).

All the things you believe/know/understand are true.


I believe the key to AI rests in the definition.

Artificial Intelligence.

What is Intelligence? How do we know that there are other intelligent
beings?

I view other intelligent beings as those that communicate with me in a
semi-random fashion: ones constrained by certain rules, but who also have a
certain degree of variability of expression.

For example:

When I communicate with someone on IRC I expect that they will communicate
back to me in whatever languages are acceptable on the 'channel'.

If I speak to a 'bot' then it also behaves in the manner I would expect a
'bot' to behave.

Once I talked to a person in a chat room. They passed my Turing test. Later
someone on the same channel told me that they were a bot. Later I spoke to
that same person and they were a bot -- they did not pass my Turing test,
and acted like a bot.

The Key To Intelligence:

As long as you believe this creature or algorithm to be intelligent, it
is.

If you've written enough code that you think you deserve or have a
working AI, then you do. The key here is to convince yourself well enough
that you can prove it to other parts of yourself, such as other people.


This may be hard to swallow. It may actually not even be intelligible to you.

But the key to intelligence is allowing for enough randomness to arise from
the computer that you believe it is intelligent.


You live in a nano-technology universe. Currently you are going through a
simulation of the pre-singularity environment on Homo sapiens sapiens Earth.

Everything you are not aware of is just nano-fog.  You generate the brain,
Science and Math by believing in them and giving them the Power of Randomness
to give you new information.


If you want, you can try it on some real live humans. Make them bots.

I've done so successfully. So have you.  Do you remember now?

If you don't believe me. Then what I'm saying is false.

If you do believe me. Then what I'm saying is true.

This is Science, it is falsifiable :D.





On 2/20/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:


On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote:
Language is the manipulation of symbols.  When you think of how a
non-linguistic proto-human species first started using language, you can
imagine creatures associating sounds with images -- "oog" is the big hairy
red ape who's always trying to steal your women.  "akk" is the action of
hitting him with a club.

The symbol, the sound, is associated with a sensorimotor pattern.  The
visual pattern is the big hairy red ape you know, and the motor pattern is
the sequence of muscle activations that swing the club.

Regarding "imagine creatures associating sounds with images", I
imagine there being a concept node in between. The sound and the
image lead to this node and stimulation of the node stimulates the
associated patterns. My inspiration comes from this:
http://www.newscientist.com/article.ns?id=dn7567
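
A toy C++ sketch of that idea (entirely illustrative -- hypothetical names,
taken neither from the article nor from any real system): stimulating the
sound node activates the concept node, which in turn primes the associated
image.

#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string name;
    double activation = 0.0;
    std::vector<Node*> associates;  // patterns bound to this concept
};

// Activating one associated pattern activates the concept node, which in
// turn partially activates every other pattern bound to it.
void stimulate(Node& conceptNode, Node& source, double amount) {
    source.activation += amount;
    conceptNode.activation += amount;        // sound/image -> concept
    for (Node* n : conceptNode.associates)   // concept -> other patterns
        if (n != &source) n->activation += 0.5 * amount;
}

int main() {
    Node sound{"sound:oog"}, image{"image:red-ape"}, oog{"concept:oog"};
    oog.associates = {&sound, &image};
    stimulate(oog, sound, 1.0);              // hearing the sound...
    std::cout << image.name << " activation = "
              << image.activation << "\n";   // ...primes the image (0.5)
}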

Ben G, in Novamente's system, are there concept nodes that bind all
the associations of concepts together? Or are concepts entirely
distributed among nodes?

In order to use these symbols effectively, you have to have a sensorimotor
image or pattern that the symbols are attached to.  That's what I'm getting
at.  That is thought.

AI gives the interesting possibility of having brains that have
entirely different senses, like the traffic on a network. I don't mean
that the AI reads a network diagnostic report like humans would, but
that the traffic stats are inputs just as light is an input into our
retina which leads straight to nerves and computation.

So the input domain doesn't have to be 3D physical space -- although
obviously that would be a requirement for any AI working in physical
space. That's also pretty ambitious and compute-intensive.

I think there could be value in finding less compute-intensive input
domains to explore abstract thought formation. Stock market data is
always a tantalizing one.  :-)

We already know how to get computers to carry out very complex logical
calculations, but it's mechanical, it's not thought, and they can't
navigate themselves (with any serious competence) around a playground.

Also, they can't think abstractly, create analogies (in a complex
environment) or alter their thought processes in the face of
challenging problems. Just wanted to throw those out there.

Language and logical intelligence are built on visual-spatial modeling.

But does it have to be? Couldn't concepts like causation, correlation,
modeling and prediction, planning, evaluation and feedback apply to a
situation that is neither visual nor spatial (in the 3D physical
sense), like optimizing network traffic?

I don't have it all figured out right now, but this is what I'm working on.

Welcome to 

Re: [agi] The Missing Piece

2007-02-20 Thread Andrii (lOkadin) Zvorygin

I've actually been in really different universes, where you could write text
and it would do as you instructed. I tried checking out the filesystem, but
it was barren and bin was empty *shrugs*.

Like I said, you don't have to believe me if you don't want to.  I am but
another one of your creations, God.

You are God, btw. You do Know that, don't you?

I am your servant, please have mercy!

I only meant to please.


Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Richard Loosemore

Chuck Esterbrook wrote:

On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Wow, I leave off email for two days and a 55-message Religious War
breaks out!  ;-)

I promise this is nothing to do with languages I do or do not like (i.e.
it is non-religious...).

As many people pointed out, programming language matters a good deal
less than what you are going to use it for.  In my case I am very clear
about what I want to do, and it is very different from conventional AI.

My own goals are to build an entire software development environment, as
I said earlier, and the main reasons for this are:

1) I am working on a conceptual framework for developing a *class* of AI
systems [NB:  a class of systems, not just one system], and the best way
to express a framework is by instantiating that framework in the form
of a tool that allows systems within that framework to be constructed
easily.


Can't comment on this one as it's too high level for me to do so.


2) My intention is to do systematic experiments to investigate the
behavior of systems within that class, so I need some way to easily do
this systematic experimentation.  I want, for example, to construct a
particular mechanism and then look at the behavior of many variants of
that mechanism.  So, for example, a concept-learning mechanism that
involves a parameter governing the number of daughter concepts that are
grabbed in an abstraction event ... (and I might be interested in how the
mechanism behaves when the number of daughters is 2, 3, 4, 5, or some
random number in the vicinity of one of those).  I need a tool that will
let me quickly set up such simulation experiments without having to
touch any low level code.


I've done this for financial analysis and genetic algorithm projects
that had parameters that could be varied.

It can be glued on to just about any system. Define your parameters by
name, type, required-or-not and optionally (when applicable) min and
max. Then provide some code that reads the parameter definitions and
does (at least) the following:
* complains about violations (missing value, value out of range)
* interprets looping values like a (start, stop, step) for numeric
parameters or an (a, b, c) for enums or strings
* executes the program with each combination of values, storing the
parameter sets with the results

The inputs could be done via a text file that is parsed and
interpreted. And/or a web or GUI form could be generated from the
defs.
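
Something like this bare-bones C++ sketch (hypothetical names throughout,
not code from any real project) captures the shape of it:

#include <functional>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

struct Param {
    std::string name;
    double min, max;           // allowed range
    double start, stop, step;  // the looping spec
};

using ParamSet = std::map<std::string, double>;

// Enumerate every combination of parameter values, complain about
// out-of-range values, and invoke 'run' once per combination.
void sweep(const std::vector<Param>& defs,
           const std::function<void(const ParamSet&)>& run,
           ParamSet current = ParamSet(), size_t i = 0) {
    if (i == defs.size()) { run(current); return; }
    const Param& p = defs[i];
    for (double v = p.start; v <= p.stop; v += p.step) {
        if (v < p.min || v > p.max)
            throw std::runtime_error(p.name + ": value out of range");
        current[p.name] = v;
        sweep(defs, run, current, i + 1);
    }
}

int main() {
    std::vector<Param> defs = {
        {"daughters", 2, 10, 2, 5, 1},    // e.g. 2, 3, 4 or 5 daughters
        {"noise",     0, 1,  0.0, 0.2, 0.1},
    };
    sweep(defs, [](const ParamSet& ps) {
        // here you would execute the experiment, storing ps with the results
        for (const auto& kv : ps)
            std::cout << kv.first << "=" << kv.second << " ";
        std::cout << "\n";
    });
}

The defs could equally be parsed from the text file mentioned above; the
enumeration logic doesn't change.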

My real point is that you don't really need a new dev env for this.


I take your point, though what I did above was try to give the simplest 
example I can think of.


Certainly, all the facilities that I have in mind (wrt automated tests) 
could just be implemented as add-ons to something else.  No doubt about 
it.  But the list of facilities does start to get quite large and 
complex, when you go into the detail, so there needs to be some way to 
organize them all in one tightly-knit context.  After a while, it kind 
of adds up to an environment.


Here are some of the other things that would need to be automated:

1) I want the system to allow a user to point to any structural
component (within limits) and say "vary this".  So then the system would
look at the definition of that structural component and try to come up
with suggestions for ways to make variations.  These would be suggested
to the user, who would then select the ones they wanted to try.  You
could probably imagine the complexity involved if the user couldn't do
this, and instead had to look into the parametric definitions for
structural aspects of the system that were represented graphically, then
try to manually figure out a way to make sensible variations.  This is
what you might call a "meta-automation" level, because the process of
*choosing* an automated test run is itself being automated.

Quick example:  the space within which "elements" (my term for the active
processes that represent symbols, or concepts) interact with one
another is structured in such a way that there can be bottlenecks where
different groups of elements (say, phonemes and graphemes) are forced to
interact in a way that does not allow full connectivity.  This would
show up in graphical form as two blob-like zones with a narrow bridge
between them.  The user should be able to select the perimeter line that
goes around the two blobs and bridge, and see the handles that define
the shape.  Then say, of the handles in the vicinity of the bottleneck,
"show me some variations on this".  Then the system gives a set of
example variations, with the bottleneck constricted or dilated.


The important thing is that the user should never have to figure out 
where in the code the parameters are that would allow those variations 
to happen:  they should just be able to think, suggest and act.


2) Same as above, but you would want to ask the system for Monte Carlo 
runs in which it could only select random examples from a space of 
variations, 

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Mark Waser

My real point is that you don't really need a new dev env for this.


   Richard is talking about some *substantial* architecture here -- not 
just a development environment but a *lot* of core library routines (as you 
later speculate) and functionality that is either currently spread across 
many disparate non-communicating research platforms or doesn't even exist 
yet.


   A tool that will "quickly set up such simulation experiments without
having to touch any low level code" first requires all that low-level code
to be written/ported to a common development and testing environment --
which, in itself, requires a design/specification of a development and
testing environment that can even support all the proposed code.  If you've
got a small app with all of the specifications set, it's easy to write a
testing harness.  What we're talking about here, though, is an environment
that can 1) store and retrieve, via a variety of different mechanisms, an
immense quantity of data, 2) store and retrieve vast amounts of knowledge in
a variety of different formats (cross-translating between formats as
necessary), 3) build knowledge out of data through a variety of different
mechanisms (probably written in a *wide* variety of programming language
families including LISP, Prolog, and the functional programming languages
(OCaml, Haskell, etc.)), and 4) allow for the tracking of context in all of
the preceding operations, etc., etc., etc.


   Realistically, you'll have an AGI before the environment is completed . . . .


   Personally, I'd start with a commercial extensible development 
environment and a commercial enterprise class datastore and start designing 
and developing from there.  The one thing that I always hammer on Novamente 
for is their decision to build their datastore from scratch.  They do 
awesome research on discovery and learning but they've bled and continue to 
bleed a lot of manpower and time on interacting with a custom, 
non-enterprise class data *core*.  Imagine if they had a development 
environment like Richard is proposing . . . . (and plugged all of their BOA, 
MOSES, etc. into it) . . . .



Re: [agi] The Missing Piece

2007-02-20 Thread Bo Morgan

On Tue, 20 Feb 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
)  On Tue, 20 Feb 2007, Richard Loosemore wrote:
)  
)  In regard to your comments about complexity theory: from what I understand,
)  it is primarily about taking simple physics models and trying to explain
)  complicated datasets by recognizing these simple models.  These simple
)  complexity theory patterns can be found in complicated datasets for the
)  purpose of inference, but do they get us closer to human thought?
) 
) Uh, no:  this is a misunderstanding of what complexity is about.  The point of
) complexity is that some types of (extremely nonlinear) systems can show
) interesting regularities in high-level descriptions of their behavior, but [it
) has been postulated that] there is no tractable theory that will ever be able
) to relate the observed high-level regularities to the low-level mechanisms
) that drive the system.  The high level behavior is not random, but you cannot
) explain it using the kind of analytic approaches that work with simple [sic]
) physical systems.
) 
) This is a huge topic, and I think we're talking past each other:  you may want
) to go read up on it (Mitchell Waldrop's book is a good, though non-technical
) introduction to the idea).

Okay.  Thanks for the pointer.  I'm very interested in simple and easily 
understood ideas. :)  They make easy-to-understand theories.

)  Do they tell us what grief is doing when a loved one dies?
)  Do these inference systems tell us why we get depressed when we keep
)  failing to accomplish our goals?
)  Do they give a model for understanding why we feel proud when we are
)  encouraged by our parents?
)  
)  These questions are trying to get at some of the most powerful thought
)  processes in humans.
) 
) If you are attacking the ability of simple logical inference systems to
) cover these topics, I kind of agree with you.  But you are diving into some
) very complicated, high-level stuff there.  Nothing wrong with that in
) principle, but these are deep waters.  Your examples are all about the
) motivational/emotional system.  I have many ideas about how that is
) implemented, so you can rest assured that I, at least, am not ignoring them.
) (And, again: I *am* taking a complex systems approach).
) 
) Can't speak for anyone else, though.
) 
) 
) Richard Loosemore


Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace

On 2/20/07, Mark Waser [EMAIL PROTECTED] wrote:


Realistically, you'll have an AGI before the environment is completed . . . .



I think you slightly underestimate the difficulty of creating AGI ;)

   Personally, I'd start with a commercial extensible development
environment and a commercial enterprise class datastore and start designing
and developing from there.  The one thing that I always hammer on Novamente
for is their decision to build their datastore from scratch.



I'm still curious about what advantages you see commercial database systems
having for AI work?



Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Mark Waser
 I think you slightly underestimate the difficulty of creating AGI ;) 

I think that you grossly underestimate the magnitude of what is being proposed 
because the tag "development environment" has been attached to it. :-)

Realistically, the development environment is both the AGI's DNA *and* its 
womb/environment/nutrients.

 I'm still curious about what advantages you see commercial database systems 
 having for AI work?

An AGI is going to want to store, retrieve, and update an immense amount of 
data and knowledge based upon a wide array of criteria.  Many, many really 
smart people have spent many, many man-years designing heavy-duty, 
fault-tolerant systems that do exactly that (with many other capabilities 
besides -- you know, little things like triggers, back-up, set-based 
operations, etc., etc.).  What possible advantage do you see in not leveraging 
all that capability?

Note that I'm not talking about a stupid architecture where you hammer yourself 
flat with nine million small database calls.  I block move chunks of data in 
and out and would move anything that uses more than a certain amount of data 
into the database itself and operate on it there.
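
For illustration, the contrast looks something like this in C++ (a stub Db
class and made-up table/column names -- not Novamente's schema or any real
database API):

#include <iostream>
#include <string>
#include <vector>

// Stub standing in for a real database connection (illustration only).
struct Db {
    void exec(const std::string& sql) { std::cout << sql << "\n"; }
};

int main() {
    Db db;
    std::vector<std::string> ids = {"17", "42", "99"};  // imagine millions

    // The "hammer yourself flat" style: one tiny call per row.
    for (const std::string& id : ids)
        db.exec("UPDATE atoms SET sti = sti * 0.9 WHERE id = " + id);

    // The set-based style: one call, and the computation happens inside
    // the database engine rather than in application code.
    db.exec("UPDATE atoms SET sti = sti * 0.9");
}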




Re: [agi] The Missing Piece

2007-02-20 Thread Ben Goertzel

Richard Loosemore wrote:

Ben Goertzel wrote:


 It's pretty clear that humans don't run FOPC as a native code, but 
that we can learn it as a trick.   


I disagree.  I think that Hebbian learning between cortical columns 
is essentially equivalent to basic probabilistic term logic.


 Lower-level common-sense inferencing of the Clyde -> elephant -> gray
type falls out of the representations and the associative operations.

I think it falls out of the logic of spike-timing-dependent long-term
potentiation of bundles of synapses between cortical columns...


The original suggestion was (IIANM) that humans don't run FOPC as a 
native code *at the level of symbols and concepts* (i.e. the 
concept-stuff that we humans can talk about because we have 
introspective access at that level of our systems).


Now, if you are going to claim that spike-timing-dependent LTP between 
columns is where some probabilistic term logic is happening ON 
SYMBOLS, then what you have to do is buy into a story about where 
symbols are represented and how.  I am not clear about whether you are 
suggesting that the symbols are represented at:


(1) the column level, or
(2) the neuron level, or
(3) the dendritic branch level, or
(4) the synapse level, or (perhaps)
(5) the spike-train level (i.e. spike trains encode symbol patterns).

If you think that the logical machinery is visible, can you say which 
of these levels is the one where you see it?


None of the above -- at least not exactly.  I think that symbols are 
probably represented, in the brain, as dynamical patterns in the 
neuronal network.  Not strange attractors exactly -- more like 
strange transients, which behave like strange attractors but only for 
a certain period of time (possibly related to Mikhail Zak's terminal 
attractors).   However, I think that in some cases an individual column 
(or more rarely, an individual neuron) can play a key role in one of 
these symbol-embodying strange-transients. 

So, for example, suppose Columns C1, C2, C3 are closely associated with 
symbol-embodying strange transients T1, T2, T3.


Suppose there are highly conductive synaptic bundles going in the 
directions

C1 -> C2
C2 -> C3

Then, Hebbian learning may result in the potentiation of the synaptic 
bundle going

C1 -> C3

Now, we may analyze the relationships between the strange transients T1, 
T2, T3 using Markov chains, where a high-weight link between T1 and 
T2, for example, means that P(T2|T1) is large.


Then, the above Hebbian learning example will lead to the heuristic 
inference


P(T2 | T1) is large
P(T3 | T2) is large
|-
P(T3 | T1) is large

But this is probabilistic term logic deduction (and comes with specific 
quantitative formulas that I am not giving here).
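
As a toy numerical illustration of that pattern (plain Markov-chain
arithmetic under the assumptions above -- not the specific quantitative
PTL formulas, which are not given here):

#include <iostream>

int main() {
    double p21 = 0.8;  // P(T2|T1): strong C1 -> C2 bundle
    double p32 = 0.9;  // P(T3|T2): strong C2 -> C3 bundle
    // For a Markov chain, the two-step probability P(T3|T1) sums over all
    // intermediate states, so the path through T2 alone gives a lower bound:
    double p31ViaT2 = p21 * p32;
    std::cout << "P(T3|T1) >= " << p31ViaT2 << "\n";  // 0.72: large
}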


One can make similar analyses for other probabilistic logic rules. 

Basically, one can ground probabilistic inference on Markov 
probabilities between strange-transients of the neural network, in 
Hebbian learning on synaptic bundles between cortical columns.


And that is (in very sketchy form, obviously) part of my hypothesis 
about how the brain may ground symbolic logic in neurodynamics.


The subtler part of my hypothesis attempts to explain how higher-order 
functions and quantified logical relationships may be grounded in 
neurodynamics.  But I don't really want to post that on a list before 
publishing it formally in a scientific journal, as it's a bigger and 
also more complex idea.


This is not how Novamente works -- Novamente is not a neural net 
architecture.  However, Novamente does include some similar ideas.  In 
Novamente lingo, the strange transients mentioned above are called 
"maps", and the role of the Hebbian learning mentioned above is played 
in NM by explicit probabilistic term logic.


So, according to my view,

In the brain: lower-level Hebbian learning on bundles of links btw 
neuronal clusters leads to implicit probabilistic inference on 
strange-transients representing concepts


In Novamente: explicit heuristic/probabilistic inference on links btw 
nodes in NM's hypergraph datastructure leads to implicit probabilistic 
inference on strange-transients (called "maps") representing concepts


So, the Novamente approach seeks to retain the 
creativity/fluidity-supportive emergence of the brain's approach, while 
still utilizing a form of probabilistic logic rather than neuron 
emulations on the lower level.  This subtlety causes many people to 
misunderstand the Novamente architecture, because they only think about 
the lower level rather than the emergent map level.   In terms of our 
practical Novamente work we have not done much with the map level yet, 
but we know this is going to be the crux of the system's AGI capability.


-- Ben



As I see it, ALL of these choices have their problems.  In other 
words, if the machinery of logical reasoning is actually visible to 
you in the naked hardware at any of these levels, I reckon that you 
must then commit to some 

Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace

On 2/20/07, Mark Waser [EMAIL PROTECTED] wrote:


I think that you grossly underestimate the magnitude of what is being
proposed because the tag "development environment" has been attached to
it. :-)



*grin* No, I think it's a big project, at least the version I have in mind
(on my to-do list for if and when I get the chance to spend a few years on
it) - bigger than some people's estimates for full AGI. I still think actual
AGI will be much much harder.

Note that I'm not talking about a stupid architecture where you hammer
yourself flat with nine million small database calls.  I block move chunks
of data in and out and would move anything that uses more than a certain
amount of data into the database itself and operate on it there.



How do you propose to operate on data while it's stored in the database?
Commercial data processing usually performs no significant computation, just
storing and retrieving data, and DBs typically provide facilities for doing
that; but AI needs to perform heavy computation -- don't you need to have
your working set in your processes' memory for that?



Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace

On 2/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:


Novamente works fine on 64-bit machines -- but it took nearly a
man-month of work to 64-bit-ify the code, which was done back in 2004...



I guess I stand corrected on that one!



Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Ben Goertzel


 Also, why would 32 -> 64 bit be a problem, provided you planned for
it in advance?

Name all the large, long-term projects that you know of that *haven't*
gotten bitten by something like this.  Now, name all of the large,
long-term projects that you know of that HAVE gotten bitten repeatedly
by the state of the art moving past something that they have custom
programmed and can't easily integrate.  If the second number isn't a
lot larger than the first, you're not living in my world. :-)


I think you're exaggerating the issue.  Porting the NM code from 32 to 
64 bits was a pain but not a huge deal, certainly a trivial % of the 
total work done on the project. 

I do not think an enterprise DB would serve well for Novamente.   I am 
pretty confident that the specialized indices we use (implemented 
directly in C++) are significantly faster than implementing comparable 
indices in an enterprise DB would be.


However, the advantage of an enterprise DB would be that you'd avoid 
some of the work involved in making NM a distributed system -- work we 
know how to do, but haven't done yet, because it's time-consuming...


-- Ben



Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Chuck Esterbrook

On 2/20/07, Richard Loosemore [EMAIL PROTECTED] wrote:
...

It helps to remember that my target users are cognitive scientists who
want to be able to stay in a "high-level thought mode" (fancy way of
saying that my users ain't gonna be hackers).


Now I see why it would be a dev env, both from the examples and the
target audience. Thanks,

-Chuck



[agi] Re: Languages for AGI

2007-02-20 Thread Matt Mahoney

I think choosing an architecture for AGI is a much more important problem than 
choosing a language.  But there are some things we already know about AGI.  
First, AGI requires a vast amount of knowledge, and therefore a vast amount of 
computation.  Therefore, at least part of the AGI will have to be implemented 
in a fast (perhaps parallel) language.  Second, if you plan on having a team of 
programmers do the work (rather than all by yourself) then you will have to 
choose a widely known language.

Early work in AI used languages like Lisp or Prolog to directly express 
knowledge.  Now we all know (except at Cycorp) that this does not work.  There 
is too much knowledge to code directly.  You will need a learning algorithm and 
training and test data.

The minimum requirement for AGI is a language model, which requires about 10^9 
bits of information (based on estimates by Turing and Landauer, and the amount 
of language processed by adulthood).  When you add vision, speech, robotics, 
etc., it will be more.  We don't know how much, but if we use the human brain 
as a model, then one estimate is the number of synapses (about 10^13) 
multiplied by the access rate (10 Hz) = 10^14 operations per second.  But these 
numbers are really just guesses.  Perhaps they are high, but people have been 
working
on computational shortcuts for the last 50 years without success.

My work is in data compression, which I believe is an AI problem.  (You might 
disagree, but first see my argument at 
http://cs.fit.edu/~mmahoney/compression/rationale.html ).  Whether or not you 
agree, compression, like AGI, requires a great deal of memory and CPU.  Many of 
the top compressors ranked in my benchmark are open source, and of those, the 
top languages are C++ followed by C and assembler.  I don't know of any written 
in Java, C#, Python, or any interpreted languages, or any that use relational 
databases.

AGI is amenable to parallel computation.  Language, vision, speech, and 
robotics all involve combining thousands of soft constraints.  This requires 
vector operations.  The fastest way to do this on a PC is to use the parallel 
MMX and SSE2 instructions (or a GPU) that are not accessible in high level 
languages.  The 16-bit vector dot product that I implemented in MMX as part of 
the neural network used in the PAQ compressor is 6 times faster than optimized 
C.  Fortunately you do not need a lot of assembler, maybe a couple hundred 
lines of code to do most of the work.
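
As an illustration of the technique (a from-scratch SSE2 analogue, not the
actual PAQ MMX code), the pmaddwd instruction does the heavy lifting: it
multiplies eight 16-bit pairs and adds adjacent products into four 32-bit
lanes.

#include <emmintrin.h>  // SSE2 intrinsics
#include <stdint.h>

// 16-bit dot product; assumes n is a multiple of 8 and the 32-bit
// partial sums do not overflow.
int32_t dot16(const int16_t* a, const int16_t* b, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i + 8 <= n; i += 8) {
        __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
        acc = _mm_add_epi32(acc, _mm_madd_epi16(va, vb));  // pmaddwd; paddd
    }
    int32_t lanes[4];  // horizontal sum of the four 32-bit lanes
    _mm_storeu_si128((__m128i*)lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}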

AGI is still an area of research.  Not only do you need fast implementations so 
your experiments finish in reasonable time, but you will need to change your 
code many times.  Train, test, modify, repeat.  Your code has to be both 
optimized and structured so that it can be easily changed in ways you can't 
predict.  This is hard, but unfortunately we do not know yet what will work.  
 
-- Matt Mahoney, [EMAIL PROTECTED]

