Re: [agi] mouse uploading

2007-04-29 Thread Shane Legg

Numbers for humans vary rather a lot.  Some types of cells have up to
200,000 connections (Purkinje neurons) while others have very few.
Thus talking about "the" number of synapses per neuron doesn't make
much sense.  It all depends on which type of neuron etc. you mean.

Anyway, when talking about a global brain average I most often see the
number 1,000.   For rat cortex (which is a bit different to mouse cortex
in terms of thickness and density) I usually see the number 10,000 as
the average (just for cortex, not the whole brain).

Shane


On 4/29/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:


Does anyone know if the number of synapses per neuron (8000) for mouse
cortical cells also applies to humans?  This is the first time I have seen
an estimate of this number.  I believe the researchers based their mouse
simulation on anatomical studies.

--- "J. Storrs Hall, PhD." <[EMAIL PROTECTED]> wrote:

> In case anyone is interested, some folks at IBM Almaden have run a
> one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene
> (in 0.1 real time):
>
> http://news.bbc.co.uk/2/hi/technology/6600965.stm
> http://ieet.org/index.php/IEET/more/cascio20070425/
> http://www.modha.org/papers/rj10404.pdf which reads in gist:
>
> Neurobiologically realistic, large-scale cortical and sub-cortical
> simulations are bound to play a key role in computational neuroscience
> and its applications to cognitive computing. One hemisphere of the mouse
> cortex has roughly 8,000,000 neurons and 8,000 synapses per neuron.
> Modeling at this scale imposes tremendous constraints on computation,
> communication, and memory capacity of any computing platform.
> We have designed and implemented a massively parallel cortical simulator
> with (a) phenomenological spiking neuron models; (b) spike-timing
> dependent plasticity; and (c) axonal delays.
> We deployed the simulator on a 4096-processor BlueGene/L supercomputer
> with 256 MB per CPU. We were able to represent 8,000,000 neurons (80%
> excitatory) and 6,300 synapses per neuron in the 1 TB main memory of the
> system. Using a synthetic pattern of neuronal interconnections, at a 1 ms
> resolution and an average firing rate of 1 Hz, we were able to run 1s of
> model time in 10s of real time!
>
> Josh


-- Matt Mahoney, [EMAIL PROTECTED]
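
As a back-of-envelope check on the quoted figures: 8,000,000 neurons at
6,300 synapses each is about 5 x 10^10 synapses, so the 1 TB of main
memory works out to roughly 20 bytes per synapse. Below is a minimal
Python sketch of that arithmetic, together with a toy pair-based STDP
update of the general kind the abstract mentions; the time constants and
learning rates are illustrative assumptions, not values from the paper.

    import math

    neurons  = 8_000_000
    synapses = 6_300 * neurons         # ~5.0e10 synapses
    mem      = 2**40                   # 1 TB of main memory, in bytes
    print(mem / synapses)              # ~21.8 bytes available per synapse

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        """Toy pair-based STDP weight change; dt_ms = t_post - t_pre."""
        if dt_ms > 0:                  # pre fires before post: potentiate
            return a_plus * math.exp(-dt_ms / tau_ms)
        return -a_minus * math.exp(dt_ms / tau_ms)  # otherwise: depress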


Re: [agi] Circular definitions of intelligence

2007-04-29 Thread Shane Legg

Mike,

> But interestingly while you deny that the given conception of
> intelligence is rational and deterministic... you then proceed to argue
> rationally and deterministically.



Universal intelligence is not based on a definition of what rationality
is.  It is based on the idea of achievement.  I believe that if you start
to behave irrationally (by any reasonable definition of the word) then
your ability to achieve goals will go down and thus so will your
universal intelligence.
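
For reference, the universal intelligence measure under discussion is, in
Legg and Hutter's notation,

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

i.e. the expected achievement V_\mu^\pi of agent \pi, summed over all
computable reward environments \mu and weighted by simplicity 2^{-K(\mu)}.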


> that actually you DON'T usually know what you desire. You have
> conflicting desires and goals. [Just how much do you want sex right now?
> Can you produce a computable function for your desire?]



Not quite.  Universal intelligence does not require that you personally
can define your, or some other system's, goal.  It just requires that the
goal is well defined in the sense that a clear definition could be
written down, even if you don't know what that would look like.

If you want intelligence to include goals that are undefinable even in
this weaker sense, then you have this problem:

"Machine C is not intelligent because it cannot do X, where X is
something that cannot be defined."

I guess that this isn't a road you want to take, as I presume that you
think that machine intelligence is possible.


> And you have to commit yourself at a given point, but that and your
> priorities can change the next minute.



A changing goal is still a goal, and as such is already taken care of by the
universal intelligence measure.


> And vis-a-vis universal intelligence, I'll go with Ben:
>
> "According to Ben Goertzel, Ph.D., "Since universal intelligence is only
> definable up to an arbitrary constant, it's of at best ~heuristic~ value
> in thinking about the construction of real AI systems. In reality,
> different universally intelligent modules may be practically applicable
> to different types of problems." [8]"



Ben's comment is about AIXI, so I'll change to that for a moment.  I'm
going to have to be a bit more technical here.

I think the compiler constant issue with Kolmogorov complexity is in some
cases important, and in others it is not.  In the case of Solomonoff's
continuous universal prior (see my Scholarpedia article on algorithmic
probability theory for details) the measure converges to the true measure
very quickly for any reasonable choice of reference machine.  With
different choices of reference machine the compiler constant may mean
that the system doesn't converge for a few more bytes of input.  This
isn't an issue for an AGI system that will be processing huge amounts of
data over time.  The optimality of its behaviour in the first hundred
bytes of its existence really doesn't matter.  Even incomputable super
AIs go through an infantile stage, albeit a very short one.


> You seem to want to pin AG intelligence down precisely, I want to be
> more pluralistic - and recognize that uncertainty and conflict are
> fundamental to its operation.



Yes, I would like to pin intelligence down as precisely as possible.

I think that if somebody could do this it would be a great step forward.
I believe that issues of definition and measurement are the bedrock of
good science.

Cheers
Shane


[agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Tintner
OK, here I think is the simplest test of adaptivity that cuts right to the 
heart of the matter, and provides the most central measure of ADAPTIVE (i.e. 
DIVERGENT) intelligence as distinct from CONVERGENT, algorithmic intelligence.

My assumption: your system has this agent/guy moving around a home/office 
environment. [Adjust question accordingly if not]

He has a simple task: "Move from A to B or D". But the normal answer "Walk it" 
is for whatever reason no good, blocked.

The simple test is this: how many alternative ways of moving from A to B, will 
the system (his brain) be able to search for and find?

[Sub-questions: how will alternatives be laid out in memory, and how will the 
system search for them?]

That's the basic test of adaptivity: HOW MANY ALTERNATIVE WAYS OF ACHIEVING
ANY GOAL CAN A SYSTEM FIND?

The obvious point here is that the human brain can find a VAST number of ways 
of moving from A to B, and achieving most goals. Your brain  should have no 
serious difficulty producing thousands of ways for that guy to do it, although 
you'll quickly slow down in the pace at which you produce them. It may take 
quite a while (or not in your special case) before you get to "get someone with 
a wheelbarrow to push him."

This resourceful capacity is the central source of the human mind's adaptivity 
and creativity.

[Most people, however, I would argue, do not APPRECIATE the unlimited 
resourcefulness of the human brain (that it just won't stop looking for and 
finding alternatives if you ask it) - as soon as you do, you realise that you 
and everyone else can be as creative as you want to be].

Comments about how the human brain does this?

P.S. It may be better to talk of alternative "FAMILIES or CLASSES of ways" of 
doing things/ reaching goals, since each may involve many variations. "Walk" 
for your system might involve "step slowly", "stroll", "walk medium pace," 
"walk fast" even "walk sideways" "walk backwards" etc.


Re: [agi] rule-based NL system

2007-04-29 Thread Jean-Paul Van Belle
@ Mike: remember that she wasn't blind/deaf from birth - read her
autobiographical account (available on Project Gutenberg - which is an
excellent corpus source btw - also available on DVD :) for how she
finally hooked up the concept of "words" as tokens for real-world
concepts when linking the word water with her memory of water from when
she wasn't blind/deaf yet. It's a nice read for those who are interested
in how 'grounding of concepts' happens (though the nuggets are few and
far between). See below two extracts.

@Matt: yes, as far as I remember typical human neurons have a few 1,000
to 10,000 synapses. But note that there are several other types of
neurons - especially the ones linking different brain areas together, as
well as those relatively very rare (much less than 10) 'emotional
state/feeling' neurons that hook up/traverse many different areas of the
brain.


>>> "Mike Tintner" <[EMAIL PROTECTED]> 04/29/07 2:04 AM >>>
Helen Keller must have had a tough time existing without words.
According to you she didn't know the shape of the chairs she sat on. She
had no words.

From Keller's autobiography:
"There was, however, one word the meaning
of which I still remembered, WATER. I pronounced it "wa-wa." Even
this became less and less intelligible until the time when Miss
Sullivan began to teach me. I stopped using it only after I had
learned to spell the word on my fingers."
(and much later)
We walked down the path to the well-house, attracted by the
fragrance of the honeysuckle with which it was covered. Some one
was drawing water and my teacher placed my hand under the spout.
As the cool stream gushed over one hand she spelled into the
other the word water, first slowly, then rapidly. I stood still,
my whole attention fixed upon the motions of her fingers.
Suddenly I felt a misty consciousness as of something
forgotten--a thrill of returning thought; and somehow the mystery
of language was revealed to me. I knew then that "w-a-t-e-r"
meant the wonderful cool something that was flowing over my hand.
That living word awakened my soul, gave it light, hope, joy, set
it free! There were barriers still, it is true, but barriers that
could in time be swept away.



[agi] Re: HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN? P.S.

2007-04-29 Thread Mike Tintner
Ah... I didn't get the test quite right. It isn't simply "how many alternative
ways can you find of achieving any goal?"

You might have pre-specified a vast number of ways of moving from A to B for 
the system.

The test is: " how many NEW (non-specified) alternative ways can you find of 
achieving any goal?"

In other words, my assumption is that, like the human brain, your system's 
database will know many ways of moving from A to B, that have not yet been used 
in this connection - ways to move around the world that you have not yet used 
or specified as ways to move around a home/ office environment, such as "ride a 
bike", "get pulled on a trolley", "use a scooter"  The true test of adaptivity 
therefore is a test of your system's capacity to find NEW ways of achieving 
goals from within its database.


Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Benjamin Goertzel



> My assumption: your system has this agent/guy moving around a home/office
> environment. [Adjust question accordingly if not]
>
> He has a simple task: "Move from A to B or D". But the normal answer
> "Walk it" is for whatever reason no good, blocked.
>
> The simple test is this: how many alternative ways of moving from A to B,
> will the system (his brain) be able to search for and find?
>
> [Sub-questions: how will alternatives be laid out in memory, and how will
> the system search for them?]
>
> That's the basic test of adaptivity: HOW MANY ALTERNATIVE WAYS OF
> ACHIEVING ANY GOAL CAN A SYSTEM FIND?




Oh, Novamente will find a hell of a lot of different ways of achieving a
simple goal like that, even in its current form...

Its two main learning modules -- evolutionary learning and probabilistic
inference -- are both quite good at diversity generation...

But actually, your question bespeaks a certain unfamiliarity with standard
GOFAI type technology.  Generating a lot of different alternatives and then
pruning down the space is pretty standard stuff.

What is harder is as follows: if goals G1 and G2 look to be related, and the
system has learned a bunch of ways to fulfill G1; then, this latter fact
should make it easier for the system to find ways to fulfill G2 (than if it
hadn't learned ways to fulfill G1 already).  This is known as "transfer
learning" and has proved more challenging for AI systems than simply
generating diverse plans for the same goal.

Because you can generate diverse plans without arriving at a deep
understanding of the goal and the space within which it is situated; but
transfer learning, except in lucky cases, requires real insight...

Ben G
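
A minimal sketch of the contrast, under toy assumptions of my own (a
"plan" is just a list of action strings): with transfer, candidate plans
for G2 are seeded from what already worked on G1, rather than enumerated
from scratch.

    import random

    def mutate(plan):
        """Perturb one step of a plan (a plan is a list of action strings)."""
        i = random.randrange(len(plan))
        return plan[:i] + [plan[i] + "-variant"] + plan[i + 1:]

    def candidates_with_transfer(g1_plans, n=50):
        # Bias the search for G2 toward variants of known G1 solutions,
        # instead of sampling the whole action space from scratch.
        return [mutate(random.choice(g1_plans)) for _ in range(n)]

    print(candidates_with_transfer([["walk", "open door", "walk"]], n=3))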


Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Benjamin Goertzel


> What is harder is as follows: if goals G1 and G2 look to be related, and
> the system has learned a bunch of ways to fulfill G1; then, this latter
> fact should make it easier for the system to find ways to fulfill G2
> (than if it hadn't learned ways to fulfill G1 already).  This is known
> as "transfer learning" and has proved more challenging for AI systems
> than simply generating diverse plans for the same goal.
>
> Because you can generate diverse plans without arriving at a deep
> understanding of the goal and the space within which it is situated; but
> transfer learning, except in lucky cases, requires real insight...
>
> Ben G





And, this is one of our internal, intermediate intelligence tests for
Novamente: for simple, related  goals G1 and G2, see how well its transfer
learning capabilities work...

This is a topic currently under discussion on NM's internal email list, for
example...

-- Ben


Re: [agi] Circular definitions of intelligence

2007-04-29 Thread Benjamin Goertzel

Shane:

"According to Ben Goertzel, Ph. D, "Since universal intelligence is only

> definable up to an arbitrary constant, it's of at best ~heuristic~ value in
> thinking about the constructure of real AI systems. In reality, different
> universally intelligent modules may be practically applicable to different
> types of problems." [8] "
>

Ben's comment is about AIXI, so I'll change to that for a moment.  I'm
going to have
to be a bit more technical here.

I think the compiler constant issue with Kolmogorov complexity is in some
cases
important, and in others it is not.  In the case of Solomonoff's
continuous universal
prior (see my Scholarpedia article on algorithmic probability theory for
details) the
measure converges to the true measure very quickly for any reasonable
choice of
reference machine.  With different choices of reference machine the
compiler
constant may mean that the system doesn't converge for a few more bytes of
input.
This isn't an issue for an AGI system that will be processing huge amounts
of data
over time.  The optimality of its behaviour in the first hundred bytes of
its existence
really doesn't matter.  Even incomputable super AIs go through an
infantile stage,
albeit a very short one.



I would prefer to remain with finite binary sequences for purposes of
discussion, as I find the introduction of infinity brings a lot of
potential for philosophical confusion.

Are you claiming that the choice of "compiler constant" is not
pragmatically significant in the definition of the Solomonoff-Levin
universal prior, and in Kolmogorov complexity?  For finite binary
sequences...

I really don't see this, so it would be great if you could elaborate.

In a practical Novamente context, it seems to make a big difference.  If we
make different choices regarding the internal procedure-representation
language Novamente uses, this will make a big difference in what
internally-generated programs NM thinks are simpler ... which will make a
big difference in which ones it retains versus forgets; and which ones it
focuses its attention on and prioritizes for generating actions.

To use another pragmatic example, both LISP and FORTRAN have universal
computing power, but, some programs are **way** shorter to code in LISP than
FORTRAN, and this makes a big practical difference.  Even though it's true
that

length(P in FORTRAN) = length(P in LISP) + O(1)

These O(1) constants, which seem so insignificant in abstract theory, can
make a big difference in reality at the human scale...

???

-- Ben
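
For reference, the invariance theorem behind that O(1) says that for any
two universal machines U and V, and all strings x,

    |K_U(x) - K_V(x)| <= c_{U,V}

where the constant c_{U,V} is roughly the length of an interpreter for
one machine written for the other, and is independent of x. Ben's point
is that c_{U,V}, while constant, can still be large on the scale of the
programs one actually cares about.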


[agi] Uh oh ... someone has beaten Novamente to the end goal!

2007-04-29 Thread Benjamin Goertzel

Just kidding ;-)

-- Forwarded message --
From: Robin Lee Powell <[EMAIL PROTECTED]>
Date: Apr 29, 2007 2:58 AM
Subject: About the weirdest mail I've ever received.  [
[EMAIL PROTECTED]: confidential and revolutionary message]
To: [EMAIL PROTECTED]


Here's me giving up on the chance to further the nature of AI
research.

-_-

Anyone else get this?  I'm wondering if someone is trying to do the
AI Box experiment without the self-selected sample aspect.

-Robin, who would have kept talking to this person if he didn't
know, with total certainty, that We Can't Do This Yet (tm).

- Forwarded message from black_box_experiment <
[EMAIL PROTECTED]> -

Subject: confidential and revolutionary message
From: black_box_experiment <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: Sun, 29 Apr 2007 05:17:07 -

Hello!

I am a famous young researcher in artificial intelligence and
psychology and I am doing an experiment on myself: transfering my
mind to a computer program, this is serious.

I would like to know if you would be interested in participating in a
amazing experiment, called the black box experiment:

I will be shut as a prisoner in a room or a box and transfer my
skills/knowledge/mind to a computer program, and another computer
program will tell you if you can discharge me or not from this box. I
am doing that experiment for my Ph.D, this is a VERY SERIOUS AND
CONFIDENTIAL MESSAGE the result will be that all my thoughts and mind
will be in a computer program, and therefore this will be the most
amazing scientific revolution in artificial intelligence and in
history, and I am serious!

with the black box experiment I will behave like a computer, I will
become transhuman/posthuman and become your new computer , I AM
REALLY SERIOUS AND THIS A SCIENTIFIC EXPERIMENT FOR MY PHD!

Holding a new computer at your home such as myself will take very
little space( less than 2 square meters) and this will never waste
your time (you can use your new computer whenever you want) and you
will be of course able to continue your private life with your
friends/ boy friend without any change

please reply and tell me what you think of that?

Your future computer

==

This message is confidential and must not be published/forwarded



- End forwarded message -

--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://singinst.org/


Re: [agi] Circular definitions of intelligence

2007-04-29 Thread Russell Wallace

On 4/29/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:


> To use another pragmatic example, both LISP and FORTRAN have universal
> computing power, but, some programs are **way** shorter to code in LISP
> than FORTRAN, and this makes a big practical difference.  Even though
> it's true that
>
> length(P in FORTRAN) = length(P in LISP) + O(1)
>
> These O(1) constants, which seem so insignificant in abstract theory,
> can make a big difference in reality at the human scale...



Sure, but that doesn't matter to Kolmogorov complexity. Why? Because the KC
addendum is bounded by the amount of code required to write a Lisp
interpreter in Fortran - and this is rather small, even on the human scale!

In practice most people don't do this (even when the program does end up
being much longer in Fortran than Lisp) for various reasons: they don't know
how to write a Lisp interpreter, they need the runtime speed of compiled
Fortran, Lisp syntax gives them a headache, their editor doesn't match
brackets, their boss will get angry if they program in a language nobody
else in the department knows or wants to learn, etc. But those are different
things than KC.

In other words, I agree with you that in practice representation matters, I
just think KC isn't necessarily a very helpful way to look at the issue.
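
One loose, computable way to see the flavour of this: compressed length
under off-the-shelf compressors as a stand-in for description length
relative to two "reference machines". This is emphatically not Kolmogorov
complexity, just an analogy - the per-format overheads stay bounded while
the inputs grow, which is the shape of the O(1) argument.

    import bz2
    import zlib

    # Two "reference machines": the gap between their description lengths
    # stays modest while the input size grows without bound.
    for n in (100, 10_000, 1_000_000):
        x = (b"abracadabra" * n)[:n]
        print(n, len(zlib.compress(x)), len(bz2.compress(x)))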


Re: [agi] Circular definitions of intelligence

2007-04-29 Thread Shane Legg

Ben,

> Are you claiming that the choice of "compiler constant" is not
> pragmatically significant in the definition of the Solomonoff-Levin
> universal prior, and in Kolmogorov complexity?  For finite binary
> sequences...
>
> I really don't see this, so it would be great if you could elaborate.



In some cases it matters, in others it doesn't.  Solomonoff's prediction
error theorem shows that the total summed expected squared prediction
error is bounded by a constant when the true generating distribution \mu
is computable.  The constant is ((ln 2)/2) K(\mu) bits.  The K term in
this bound depends on the choice of reference machine.  For a reasonable
choice of reference machine you might be able to push the bound up by
something like 1000 bits.  If you are considering long running systems
that will process large amounts of data, that 1000 extra bits is tiny.
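
In the form the theorem is usually stated (e.g. in Hutter's presentation),
with M Solomonoff's mixture and \mu the true computable distribution:

    \sum_t E_\mu[ (M(x_t = 1 | x_{<t}) - \mu(x_t = 1 | x_{<t}))^2 ]
        <= ((ln 2)/2) K(\mu)

so the total error over the entire infinite future is finite, and scales
with the complexity of the environment rather than with how long the
system runs.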

On the other hand, if you want to know whether K(10) < K(147) then your
answer will depend on which reference machine you use.  In short:
Kolmogorov complexity works well for reasonably big objects; it doesn't
work well for small objects.

Probably the best solution is to condition the measure with information
about the world, in which case K(10|lots of world data) < K(147|lots of
world data) should work the way you expect.  Google complexity works this
way.  In the case of Solomonoff induction, you let the predictor watch
the world for a while before you start trying to get it to solve
prediction tasks.


> In a practical Novamente context, it seems to make a big difference.  If
> we make different choices regarding the internal procedure-representation
> language Novamente uses, this will make a big difference in what
> internally-generated programs NM thinks are simpler ... which will make a
> big difference in which ones it retains versus forgets; and which ones it
> focuses its attention on and prioritizes for generating actions.



I think that the universal nature we see in Kolmogorov complexity should
also apply to practical AGI systems.  By that I mean the following:

By construction, things which have high Kolmogorov complexity are complex
with respect to any reasonable representation system.  In essence, the
reference machine is your representation system.  Once an AGI system has
spent some time learning about the world I expect that it will also find
that there are certain types of representation systems that work well for
certain kinds of problems.  For example, it might encounter a problem
that seems complex, but then it realises that, say, if it views the
problem as a certain type of algebra problem then it knows how to find a
solution quite easily.  I think that the hardest part of finding a
solution to a difficult problem often lies in finding the right way to
view the problem, in other words, the right representation.

Cheers
Shane


Re: [agi] Re: HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN? P.S.

2007-04-29 Thread Bob Mottram

There are numerous ways in which a goal could be achieved, and in the
human case this is usually done just by traditional thinking, i.e.
observing other people completing the task, then trying to copy the way
they did it.  The relative social status of the demonstrator to the
observer dictates how probable they will consider the solution to be.
Few people come up with entirely original ideas about how to do things,
and when they do they're quickly copied.  An AGI capable of learning from
demonstration by humans would need to be able to follow these same rules.
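
A toy rendering of that status weighting, under assumptions of my own
(status as a scalar score, an exponential preference for higher-status
demonstrators):

    import math
    import random

    def adopt_solution(demos, beta=1.0):
        """demos: list of (solution, demonstrator_status) pairs.
        Higher-status demonstrators are exponentially more likely to
        have their demonstrated solution copied."""
        weights = [math.exp(beta * status) for _, status in demos]
        r = random.uniform(0, sum(weights))
        for (solution, _), w in zip(demos, weights):
            r -= w
            if r <= 0:
                return solution
        return demos[-1][0]

    print(adopt_solution([("use ladder", 0.2), ("copy the boss", 1.5)]))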



On 29/04/07, Mike Tintner <[EMAIL PROTECTED]> wrote:


> Ah... I didn't get the test quite right. It isn't simply "how many
> alternative ways can you find of achieving any goal?"
>
> You might have pre-specified a vast number of ways of moving from A to B
> for the system.
>
> The test is: "how many NEW (non-specified) alternative ways can you find
> of achieving any goal?"
>
> In other words, my assumption is that, like the human brain, your
> system's database will know many ways of moving from A to B that have
> not yet been used in this connection - ways to move around the world
> that you have not yet used or specified as ways to move around a
> home/office environment, such as "ride a bike", "get pulled on a
> trolley", or "use a scooter". The true test of adaptivity therefore is a
> test of your system's capacity to find NEW ways of achieving goals from
> within its database.




Re: [agi] Circular definitions of intelligence

2007-04-29 Thread Benjamin Goertzel

> I think that the hardest part of finding a solution to a difficult
> problem often lies in finding the right way to view the problem, in
> other words, the right representation.
>
> Cheers
> Shane



Yes ... but, what this means is that a critical task of AGI design is to
be sure your AGI has, or has the capability to relatively easily learn, a
good **representation for representing representations**.

Once it has a good meta-representation language like this, then indeed,
it can aptly find the right representations to match with problems it is
presented with.

But learning the right meta-representation language is, I suspect, most
of what happens on the way from Piaget's infantile stage to Piaget's
formal stage.

I believe the human brain is "constructed" so as to grow a certain sort
of meta-representation language within itself, based on childhood neural
development in conjunction with the experiences of a typical childhood.

Even though any general meta-representation language can be translated
into any other one within O(1) [i.e. the length of the translation
program is constant with respect to the size of the representation being
expressed], in practice the choice of meta-representation language is
going to be critical to the viability of the intelligent system...

-- Ben


Re: [agi] Uh oh ... someone has beaten Novamente to the end goal!

2007-04-29 Thread Mike Dougherty

On 4/29/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> Holding a new computer at your home such as myself will take very
> little space( less than 2 square meters) and this will never waste
> your time (you can use your new computer whenever you want) and you
> will be of course able to continue your private life with your
> friends/ boy friend without any change


Sounds like a new way to get distributed IP addresses for SPAM/ddos -
"Put my uploaded consciousness contained in this 2 square meter box on
your network and go about your normal life"

Yeah, right.



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty

On 4/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

> He has a simple task: "Move from A to B or D". But the normal answer
> "Walk it" is for whatever reason no good, blocked.


Disambiguate-
1. Move from starting point A to either B or D
2. Move from either A to B or take another option D

I feel we should practice unambiguous speech with each other so we can
have some hope of conversation with machine intelligence.  The less
guessing it has to do about what we actually meant, the more productive
the dialog can be.  It helps between people too.

humorous example:  My wife and I had finished a discussion about
various ways to contain the dog in our yard.  After a long pause, I
asked, "How would you feel about fencing in our yard?"  She threw me a
shocked expression and asked, "Are you challenging me to a duel?"  I
laughed: I meant "putting a fence around the yard"; she understood "a
sword fight with rapiers".



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Richard Loosemore

Mike Dougherty wrote:

> On 4/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>> He has a simple task: "Move from A to B or D". But the normal answer
>> "Walk it" is for whatever reason no good, blocked.
>
> Disambiguate-
> 1. Move from starting point A to either B or D
> 2. Move from either A to B or take another option D
>
> I feel we should practice unambiguous speech with each other so we can
> have some hope of conversation with machine intelligence.  The less
> guessing it has to do about what we actually meant, the more productive
> the dialog can be.  It helps between people too.
>
> humorous example:  My wife and I had finished a discussion about
> various ways to contain the dog in our yard.  After a long pause, I
> asked, "How would you feel about fencing in our yard?"  She threw me a
> shocked expression and asked, "Are you challenging me to a duel?"  I
> laughed: I meant "putting a fence around the yard"; she understood "a
> sword fight with rapiers".


The idea that human beings should constrain themselves to a simplified,
artificial kind of speech in order to make life easier for an AI is one
of those Big Excuses that AI developers have made, over the years, to
cover up the fact that they don't really know how to build a true AI.


It is a temptation to be resisted.

No retreat to hard-coded blocks world programs.


Richard Loosemore.



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Tintner

Er sorry.

The ambiguity arises because I wanted the journey to be a reasonable
distance... let's say through two or three rooms/corridors or more - a
distance where a range of solutions like those proposed - use a scooter/
have someone carry you in a wheelbarrow... - would be relevant - as
opposed to, say, moving the next three feet (where there still might be a
range of new solutions, but many fewer).


So in fact, A to B will do as long as you understand it's a reasonable 
distance.


But it's open to you to create your own variations of the problem scenario. 
One shouldn't be too pedantic or literal.




- Original Message - 
From: "Mike Dougherty" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, April 29, 2007 6:21 PM
Subject: Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?



On 4/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> He has a simple task: "Move from A to B or D". But the normal answer
> "Walk it" is for whatever reason no good, blocked.


Disambiguate-
1. Move from starting point A to either B or D
2. Move from either A to B or take another option D

I feel we should practice unambiguous speech with each other so we can
have some hope of conversation with machine intelligence.  The less
guessing it has to do about what we actually meant, the more productive
the dialog can be.  It helps between people too.

humorous example:  My wife and I had finished a discussion about
various ways to contain the dog in our yard.  After a long pause, I
asked, "How would you feel about fencing in our yard?"  She threw me a
shocked expression and asked, "Are you challenging me to a duel?"  I
laughed: I meant "putting a fence around the yard"; she understood "a
sword fight with rapiers".












Re: [agi] rule-based NL system

2007-04-29 Thread Mark Waser
Again, my replies are inline below.

 Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Sunday, April 29, 2007 1:01 AM
  Subject: Re: [agi] rule-based NL system


  Mark,

  >> I need to know a bit more about your approach.  What do you mean when you 
say "grammar is embedded in your KR"?

  The knowledge representation scheme is based upon the idea that language is 
the substrate of basic cognition.  Thus, everything in it should be classified, 
categorized, viewed and treated in the way in which language has evolved to 
treat it.  This approach can give a number of insights and help restrict the 
problem.  For example, the normal KR object is viewed as a noun and the classic 
links are viewed as sentences with the nouns/objects occupying slots whose 
characteristics and behavior are inspired by grammar.  There is also a lot of 
hierarchy that is implicit in language so you immediately realize that a noun 
slot can be filled by either a simple noun or a noun clause and that the same 
holds true of verbs, etc.  You also realize that language gives you a lot of 
inheritance hierarchies -- particularly with verbs where it is most important 
for simplification and efficiently storing restrictions (i.e. how many verbs 
can be generalized to simply "move" or "give"?).  Grammar also gives you an 
excellent idea of what (additional) information you should have available at 
any given point.
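
  A minimal sketch of that structure in Python, with names of my own
invention (Noun, Link and Slot are illustrative, not the actual classes):
links read like sentences, and any noun slot can hold either a simple
noun or a whole nested clause.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Noun:                  # a simple noun/object
        label: str

    @dataclass
    class Link:                  # a sentence-like relation between slots;
        verb: str                # slots behave like grammatical roles
        subject: "Slot"
        obj: "Slot"

    Slot = Union[Noun, Link]     # a noun slot: a noun or a whole clause

    # "The dog that chased the cat bit the mailman."
    clause = Link("chased", Noun("dog"), Noun("cat"))
    fact = Link("bit", clause, Noun("mailman"))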

  >> For an example rule like "NP --> det noun", how is it represented or 
"embedded" in your scheme? 

  Noun and object are equivalent in the scheme as long as you realize that 
objects are reducible (i.e. that an object may also be a noun phrase instead of 
a simple noun).  Links can be simple verbs or monstrous collections of facts 
themselves.

  >> Your approach may have these problems:
  >> 1.  you cannot learn a new NL;  English is hard-wired in your KR

  Nope.  Primarily I'm using those structural aspects of grammar which are 
invariant across languages.  Cognition is multi-dimensional but language is 
primarily one-dimensional.  The compression of the multiple dimensions of 
cognition down to the single dimension of language is where languages differ 
and the vast majority of that difference is labels (different words in 
different languages) and different output ordering.  The structures are 
fundamentally the same across languages.  

  As I've said before about learning a new language -- "I think that all it 
would require would be tagging each word with a language, a languageA to 
languageB dictionary, and a quick overhaul of the parser and generator to make 
the link types be language specific.  And yes, I *am* saying/claiming that I 
believe that this approach will pretty much automatically give you natural 
language translation."

  >> 2.  you may have difficulty interpreting "irregular" sentences.  For
example: "Better is the enemy of good" or "I am so not into this stuff".  Your
texts need to be 100% grammatical.

  Nope.  This focus actually gives me a better shot at "irregular" sentences 
since not only can I tell when something is ungrammatical but I actually have a 
decent idea of what is missing so I can actively try to derive it.  Much of the 
time, sentences are ungrammatical because something is implied and left out 
i.e. "Stop that!".

  Oh, and by the way, your two sentence examples are not irregular in grammar
at all.  What throws you in the former is the fact that "better" is occupying
a noun slot when it is normally an adjective.  In this case, however, the
grammar *is* correct because you are talking about the noun/the concept of
better.  What throws you in the latter is that you don't think of "very" as
being a synonym/definition of "so".  In both cases, relying on grammar makes
your life tremendously easier because it *tells you* when something is not
being used in the most common fashion.

  >> 3.  you may have problems doing "meta-linguistic" reasoning, ie, reasoning
*about* language itself.  For example, recognizing the peculiar speech pattern
of Yoda in Star Wars, or... (can't think of more examples now).

  Nope.  The system's reaction to Yoda would be the same as yours . . . What
the heck?  His sentences have all the structures they are supposed to (i.e.
subject, verb, object) but they're always backwards.  And then it will cope
with it quite well.  Note that we don't normally rely on grammatical order for
the simplest sentences and the system needn't either after a while.

  >> In my approach, everything is represented by rules, therefore it has the
most *generality*.  Your critique is that it is computationally too slow, but
I can use the following speed-up tricks:
  >> 1.  human-assisted disambiguation (asking the user questions etc)
  >> 2.  restrict to Basic English and short sentences
  >> 3.  other heuristics to improve the inference engine, eg using word
frequency statistics

  It's a fundamental trade-off -- speed for flexibility.  Your "spee

SV: [agi] mouse uploading

2007-04-29 Thread Jan Mattsson
Has this approach been successful for any "lesser" animals? E.g., has anyone
simulated an insect brain system connected to a simulated insect body in a
virtual environment? Starting with a mouse brain seems a bit ambitious.

Since I haven't posted on the list before I guess I should introduce myself: 
I'm Jan Mattsson in Stockholm, Sweden. A software developer by profession, I 
first became interested in AI when I read "Gödel Escher Bach - an Eternal 
Golden Braid" many years ago (actually switched from physics to computer 
science because of it). More recently I read Kurzweil's "The Singularity Is
Near", which brought me here.

/JanM


-----Original Message-----
From: J. Storrs Hall, PhD. [mailto:[EMAIL PROTECTED]
Sent: Sat 2007-04-28 19:15
To: agi@v2.listbox.com
Subject: [agi] mouse uploading
 
In case anyone is interested, some folks at IBM Almaden have run a 
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in 
0.1 real time):

http://news.bbc.co.uk/2/hi/technology/6600965.stm
http://ieet.org/index.php/IEET/more/cascio20070425/
http://www.modha.org/papers/rj10404.pdf which reads in gist:

Neurobiologically realistic, large-scale cortical and sub-cortical simulations 
are bound to play a key role in computational neuroscience and its 
applications to cognitive computing. One hemisphere of the mouse cortex has 
roughly 8,000,000 neurons and 8,000 synapses per neuron. Modeling at this 
scale imposes tremendous constraints on computation, communication, and 
memory capacity of any computing platform. 
 We have designed and implemented a massively parallel cortical simulator with 
(a) phenomenological spiking neuron models; (b) spike-timing dependent 
plasticity; and (c) axonal delays.  
 We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 
256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) 
and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a 
synthetic pattern of neuronal interconnections, at a 1 ms resolution and an 
average firing rate of 1 Hz, we were able to run 1s of model time in 10s of 
real time!

Josh


Re: SV: [agi] mouse uploading

2007-04-29 Thread Charles D Hixson
I think someone at UCLA did something similar for lobsters.  This was 
used as material for an SF story ("Lobsters", Charles Stross[sp?])


Jan Mattsson wrote:

Has this approach been successful for any "lesser" animals? E.g.; has anyone 
simulated an insect brain system connected to a simulated insect body in a virtual 
environment? Starting with a mouse brain seems a bit ambitious.

Since I haven't posted on the list before I guess I should introduce myself: I'm Jan Mattsson in 
Stockholm, Sweden. A software developer by profession, I first became interested in AI when I read 
"Gödel Escher Bach - an Eternal Golden Braid" many years ago (actually switched from 
physics to computer science because of it). More recently I read Kurzweil's "The Singularity 
is near", that brought me here.

/JanM


-----Original Message-----
From: J. Storrs Hall, PhD. [mailto:[EMAIL PROTECTED]
Sent: Sat 2007-04-28 19:15
To: agi@v2.listbox.com
Subject: [agi] mouse uploading
 
In case anyone is interested, some folks at IBM Almaden have run a 
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in 
0.1 real time):


http://news.bbc.co.uk/2/hi/technology/6600965.stm
http://ieet.org/index.php/IEET/more/cascio20070425/
http://www.modha.org/papers/rj10404.pdf which reads in gist:

Neurobiologically realistic, large-scale cortical and sub-cortical simulations 
are bound to play a key role in computational neuroscience and its 
applications to cognitive computing. One hemisphere of the mouse cortex has 
roughly 8,000,000 neurons and 8,000 synapses per neuron. Modeling at this 
scale imposes tremendous constraints on computation, communication, and 
memory capacity of any computing platform. 
 We have designed and implemented a massively parallel cortical simulator with 
(a) phenomenological spiking neuron models; (b) spike-timing dependent 
plasticity; and (c) axonal delays.  
 We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 
256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) 
and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a 
synthetic pattern of neuronal interconnections, at a 1 ms resolution and an 
average firing rate of 1 Hz, we were able to run 1s of model time in 10s of 
real time!


Josh



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty

On 4/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The idea that human beings should constrain themselves to a simplified,
artificial kind of speech in order to make life easier for an AI is one
of those Big Excuses that AI developers have made, over the years, to
cover up the fact that they don't really know how to build a true AI.

It is a temptation to be resisted.

No retreat to hard-coded blocks world programs.


You're right - we should continue to use language poorly as is our
right as humans to communicate past each other without identifying the
failure of either the sender or the recipient for message integrity.
I see now how that makes much more sense for email lists, so it should
apply well to "true AGI"

I'm not exactly clear on "true AGI" - do any humans possess this trait?

ok, I know there's a snarky tone here, but I thought I had a valid
point (I'm sure I'll be shown my error soon enough)



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Tintner

Mike,

There is something fascinating going on here - if you could suspend your 
desire for precision, you might see that you are at least half-consciously 
offering contributions as well as objections. (Tune in to your constructive 
side).


I remember thinking that you were probably undercutting yourself with the 
example of the elephant and the chair. Here you certainly are.


What you offered was a fine example of human adaptivity. Your wife took a
fairly straightforward sentence - "How would you feel about fencing in our
yard?" - and found a new kind of meaning for it - a new and surprising way
of achieving the goal of understanding it - switched from the obvious
meaning of fencing to the fighting meaning. That's classic adaptivity.


Jokes do this all the time - see Arthur Koestler's The Ghost in the Machine. 
They are another form of adaptivity/ creativity.


[Another comparable example would be the Airplane!-type joke:
A: You can't mean: go to the hospital, surely?
B: Yes I do. And don't call me Shirley.]

The reality of human communication is that we are always open - as we are
with jokes - to double entendres, to double and triple meanings. It is an
inevitable part of using language.


There is absolutely no way of using language with the precision you require. 
It's a long discredited philosopher's dream.


So an AGI machine - that is to use language - must be prepared for 
ambiguities all the time.


A reasonable response of an AGI machine to my "A to B or D" sentence would 
be like yours - "do you mean this or that?" To try and resolve the 
ambiguity. Although I wouldn't call that adaptivity,  more a standard, 
algorithmic response.


A more reasonable - and perhaps somewhat adaptive - response, however, would 
I suggest have been along the lines of: "does it matter which is the 
meaning? - the problem he's setting (finding another kind of way to move to 
a goal) remains the same with either meaning."  There would have been a 
certain minimal adaptivity in then suspending/ varying the normal response.


(In my adaptive examples for new ways of getting from A to B - I tried to 
use striking examples - like wheelbarrows etc - but of course the borderline 
between what is a standard, algorithmic way of achieving a goal and an 
adaptive way, may be much shadier).


P.S. Another interesting question that comes back to the issue of : how does 
the human brain connect up these very different alternative ways of 
achieving goals - is: how did your brain jump from the "do you mean A to B 
or D ?" etc question/ objection to the "fencing in the yard" example?


The thought occurs to me that your objection may have been trying to "fence 
me in" (by asking actually for the impossible re language precision) - & 
that led to the joke connection (which may actually have been the other side 
of your mind demonstrating that there is always another way to understand 
language sentences).





- Original Message - 
From: "Mike Dougherty" <[EMAIL PROTECTED]>

To: 
Sent: Monday, April 30, 2007 2:44 AM
Subject: Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?



On 4/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The idea that human beings should constrain themselves to a simplified,
artificial kind of speech in order to make life easier for an AI is one
of those Big Excuses that AI developers have made, over the years, to
cover up the fact that they don't really know how to build a true AI.

It is a temptation to be resisted.

No retreat to hard-coded blocks world programs.


You're right - we should continue to use language poorly as is our
right as humans to communicate past each other without identifying the
failure of either the sender or the recipient for message integrity.
I see now how that makes much more sense for email lists, so it should
apply well to "true AGI"

I'm not exactly clear on "true AGI" - do any humans possess this trait?

ok, I know there's a snarky tone here, but I thought I had a valid
point (I'm sure I'll be shown my error soon enough)









