[agi] Interpreting Brain damage experiments

2007-12-07 Thread Dennis Gorelik
Richard,

 Did you know, for example, that certain kinds of brain damage can leave
 a person with the ability to name a visually presented object, but then
 be unable to pick the object up and move it through space in a way that
 is consistent with the object's normal use ... and that another type
 of brain damage can result in a person having exactly the opposite
 problem: they can look at an object and say "I have no idea what that
 is", and yet when you ask them to pick the thing up and do what they
 would typically do with the object, they pick it up and show every sign
 that they know exactly what it is for (e.g. the object is a key: they say
 they don't know what it is, but then they pick it up and put it straight
 into a nearby lock).

 Now, interpreting that result is not easy, but it does seem to tell us
 that there are two almost independent systems in the brain that handle
 vision-for-identification and vision-for-action.

That's not an exact explanation.
In both cases the vision module works fine:
vision-for-identification is intact in both cases.

In the first case the identified object cannot produce the proper actions,
because the connection with the action module was damaged.

In the other case the identified object cannot be resolved into a language
concept, because the connection with the language module was damaged.

Agree?



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Well, I'm not sure if not doing logic necessarily means a system is
irrational, i.e. if rationality equates to logic.  Any system
consistently followed can be classified as rational. If, for example, a
program consistently does Freudian free association and produces nothing
but a chain of associations with some connection:


bird - - feathers - four..tops 

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of 
rules. And the rules could, for argument's sake, specify that every step 
is illogical - as in breaking established rules of logic - or that steps 
are alternately logical and illogical.  That too would be rational. 
Neural nets, from the little I know, are also rational inasmuch as they
follow rules. Ditto Hofstadter & Johnson-Laird, who from again the little I
know also seem rational - Johnson-Laird's jazz improvisation program,
from my cursory reading, seemed rational and not truly creative.


Sorry to be brief, but:

This raises all sorts of deep issues about what exactly you would mean
by "rational".  If a bunch of things (computational processes) come
together and each contribute something to a decision that results in
an output, and the exact output choice depends on so many factors coming
together that it would not necessarily be the same output if roughly the
same situation occurred another time, and if none of these things looked
like a rule of any kind, then would you still call it rational?


If the answer is "yes", then whatever would count as not rational?


Richard Loosemore



I do not know enough to pass judgment on your system, but you do strike
me as a rational kind of guy (although probably philosophically much
closer to me than most here, as you seem to indicate).  Your attitude to
emotions seems to me rational, and your belief that you can produce an
AGI that will almost definitely be cooperative also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using 
the word with a small c, rather than the social kind, where someone 
produces a new idea that no one in society has had or published before). 
If a system can change its approach and rules of reasoning at literally 
any step of problem-solving, then it is truly crazy/ irrational (think 
of a crazy path). And it will be capable of producing all the human 
irrationalities that I listed previously - like not even defining or 
answering the problem. It will by the same token have the capacity to be 
truly creative, because it will ipso facto be capable of lateral 
thinking at any step of problem-solving. Is your system capable of that? 
Or anything close? Somehow I doubt it, or you'd already be claiming the 
solution to both AGI and computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - I actually don't think you will - get all
pedantic on me like so many AI-ers and say "ah, but we already have
programs that can modify their rules." Yes, but they do that according
to metarules - they are still basically rulebound. A crazy/creative
program is rulebreaking (and rulecreating) - it can break ALL the rules,
incl. metarules. Rulebound vs. rulebreaking is one of the most crucial
differences between narrow AI and AGI.



Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create 
programs (and what are the programs/ systems) that are either 
irrational or non-rational  (and described  as such)?


I'm a little partied out right now, so all I have time for is to 
suggest: Hofstadter's group builds all kinds of programs that do 
things without logic.  Phil Johnson-Laird (and students) used to try 
to model reasoning ability using systems that did not do logic.  All 
kinds of language processing people use various kinds of neural nets:  
see my earlier research papers with Gordon Brown et al, as well as 
folks like Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and 
Tyler used something called a Cohort Model to describe some aspects 
of language.


I am just dragging up the name of anyone who has ever done any kind of
computer modelling of some aspect of cognition:  none of these people
use systems that do any kind of logical processing.  I could
go on indefinitely.  There are probably hundreds of them.  They do not
try to build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform 
and often fierce resistance both on this and another AI forum.


Hey, join the club!  You have read my little brouhaha with 

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Jean-Paul Van Belle
Sounds like the worst-case scenario: computations that need between, say, 20 and
100 PCs. Too big to run on a very souped-up server (4-way quad-core processor with
128 GB RAM), but scaling up to a 100-PC Beowulf cluster typically means a factor-of-10
slow-down due to communications (unless it's a
local-data/computation-intensive algorithm), so you actually haven't gained much
in the process. {Except your AGI is now ready for a distributed computing
environment, which I believe luckily Novamente was explicitly designed for.}
:)
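
(A toy back-of-envelope sketch of that slow-down arithmetic, in Python; the node
count and penalty factors are just the illustrative numbers from the paragraph
above, not measurements:)

def effective_speedup(nodes, comm_penalty):
    # Ideal linear speedup divided by a communication slow-down factor.
    return nodes / comm_penalty

# 100-PC Beowulf cluster with the ~10x communication slow-down mentioned above:
print(effective_speedup(100, 10))   # -> 10.0, i.e. not much beyond one souped-up server
# A local-data/computation-intensive algorithm with little communication overhead:
print(effective_speedup(100, 1.5))  # -> ~66.7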
 
=Jean-Paul
 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

 Benjamin Goertzel [EMAIL PROTECTED] 2007/12/07 15:06 
I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.


Re: Re[2]: [agi] Solution to Grounding problem

2007-12-07 Thread Mike Tintner
Your bot is having a conversation - in words. Words are in fact continually 
made sense of - grounded -  by the human brain - converted into sensory 
images - and have to be.


I've given simple examples of snatches of conversation, which are in fact 
obviously thus grounded and have to be.


The only way a human - or a machine - can make sense of sentence 1 is by
referring to a mental image/movie of Bush walking. Merely referring to more
words won't cut it.  Ditto for sentence 2 - it is essential to refer to an
image of Dennis to establish whether he is handsome. If someone asks me right
now if you are handsome, I can half understand the words, but I can't
tell whether they are true, because I have never seen you (although I'm
sure you're incredibly butch). Ditto with sentence 3: a human or a machine
can only really tell whether a person's dialogue is getting emotional by
forming a sensory/sound image of the dialogue on the page and thus of the
tone - which you do all the time whether you're aware of it or not.


Words and all symbols are totally abstract - if you don't have a sensory
image of what they refer to, you can't understand or ground them - that's
the grounding problem. "Get me a grundchen, Dennis." Meaningless.
Ungrounded. But if I show you a picture of a grundchen, you will have no
problem knowing what it is, and getting one.


Oh,  just to make your day, if you don't have a body, you can't understand 
the images either - because all images have a POV - and are at a distance 
from an observer - which will take a little more time to explain. That's the 
extended grounding problem.


Is all that clear? If it is, it's grounded.





Mike,

Was it your explanation of what Grounding Problem is?

If it was - you missed the explanation and gave only examples ...


Dennis:  1) Grounding Problem (the *real* one, not the cheap substitute
that everyone usually thinks of as the symbol grounding problem).



 Say we are trying to build an AGI for the purpose of running an intelligent
chat-bot. What would be the grounding problem in this case?



Example: understanding:

1. Bush walks like a cowboy, doesn't he?
2. Dennis Gorelik is v. handsome, no?
3. You're getting v. emotional about this















Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote:


  Matt: AGI research needs
  special hardware with massive computational capabilities.
 

 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions.

Problems that would likely go beyond the capability of a current PC to solve
in a realistic amount of time, in the
current NM architecture, would include for instance:

-- Learning a new type of linguistic relationship (in the context of
link grammar, this would mean e.g. learning a new grammatical link type)

-- Learning a new truth value formula for a probabilistic inference rule

-- Recognizing objects in a complex, rapidly-changing visual scene

(Not that we have written the code to let the system solve these particular
problems yet ... but the architecture should allow it...)

I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.

-- Ben G



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bob Mottram
If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much.  Provided that I had the right knowledge I
think I could produce a proof of concept type AGI on a single PC
today, even if it ran like a tortoise.  It's the knowledge which is
mainly lacking I think.

Although I do a lot of stuff with computer vision I find myself not
being all that restricted by computational limitations.  This
certainly wasn't the case a few years ago.  Generally even the lowest
end hardware these days has enough compute power to do some pretty
sophisticated stuff, especially if you include the GPU.



Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Vladimir Nesov
I have a doubt about the role of stochastic variance in this parallel
terraced scan as it proceeds in humans (or could proceed with the same
functional behavior in AIs). Could it be that the low-level mechanisms are
not that stochastic and just compute a 'closure' of a given context?
The closure brings up a specific collection of answer-candidates, and if
they are unsatisfactory, or if there is time to contemplate some more,
the deliberation level slightly changes the context by introducing
a particular bias into it, so that the 'closure' gives a different set of
answers.

Effectively, this process is separated into two levels, where the low-level
process doesn't work stochastically, and the high-level process messes
with the initial conditions of the low-level process, using some kind of
ad-hoc pseudorandom generation of biases (for example, based on a
collection of simple procedures that iterate over available concepts).
It certainly feels this way introspectively, and I'm not sure how it
could be determined experimentally; probably by the delays between phases of
this process.
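
(A minimal Python sketch of the two-level idea just described - a deterministic
low-level closure plus high-level pseudorandom biasing of the context; the
concrete closure rule and the bias vocabulary below are placeholders, not part
of the actual proposal:)

import itertools
import random

def closure(context):
    # Deterministic low-level step: expand the context into the set of
    # answer candidates it directly entails.  The rule here (pairing up
    # concepts already in the context) is a placeholder.
    return {a + "-" + b for a, b in itertools.permutations(sorted(context), 2)}

def deliberate(initial_context, satisfactory, max_rounds=10, seed=0):
    # High-level loop: if the closure of the current context yields no
    # satisfactory candidate, inject a pseudorandom bias and try again.
    rng = random.Random(seed)
    context = set(initial_context)
    bias_vocabulary = ["bias-%d" % i for i in range(max_rounds)]
    for _ in range(max_rounds):
        good = {c for c in closure(context) if satisfactory(c)}
        if good:
            return good
        context.add(rng.choice(bias_vocabulary))
    return set()

# Toy run: keep perturbing the context until some candidate mentions a bias.
print(deliberate({"key", "lock"}, lambda c: "bias" in c))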


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Interpreting Brain damage experiments

2007-12-07 Thread Richard Loosemore

Dennis Gorelik wrote:

Richard,


Did you know, for example, that certain kinds of brain damage can leave
a person with the ability to name a visually presented object, but then
be unable to pick the object up and move it through space in a way that
is consistent with the object's normal use ... and that another type
of brain damage can result in a person having exactly the opposite
problem: they can look at an object and say "I have no idea what that
is", and yet when you ask them to pick the thing up and do what they
would typically do with the object, they pick it up and show every sign
that they know exactly what it is for (e.g. the object is a key: they say
they don't know what it is, but then they pick it up and put it straight
into a nearby lock).



Now, interpreting that result is not easy, but it does seem to tell us
that there are two almost independent systems in the brain that handle
vision-for-identification and vision-for-action.


That's not an exact explanation.
In both cases the vision module works fine:
vision-for-identification is intact in both cases.

In the first case the identified object cannot produce the proper actions,
because the connection with the action module was damaged.

In the other case the identified object cannot be resolved into a language
concept, because the connection with the language module was damaged.

Agree?


I don't think this works, unfortunately, because that was the first
simple explanation that people came up with, and it did not match up
with the data at all.  I confess I do not have time to look this up
right now.  You wouldn't be able to read one of the latest cognitive
neuropsychology books (not cognitive neuroscience, note) and let me know,
would you? ;-)



Richard Loosemore



Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Richard Loosemore

Vladimir Nesov wrote:

I have a doubt about the role of stochastic variance in this parallel
terraced scan as it proceeds in humans (or could proceed with the same
functional behavior in AIs). Could it be that the low-level mechanisms are
not that stochastic and just compute a 'closure' of a given context?
The closure brings up a specific collection of answer-candidates, and if
they are unsatisfactory, or if there is time to contemplate some more,
the deliberation level slightly changes the context by introducing
a particular bias into it, so that the 'closure' gives a different set of
answers.

Effectively, this process is separated into two levels, where the low-level
process doesn't work stochastically, and the high-level process messes
with the initial conditions of the low-level process, using some kind of
ad-hoc pseudorandom generation of biases (for example, based on a
collection of simple procedures that iterate over available concepts).
It certainly feels this way introspectively, and I'm not sure how it
could be determined experimentally; probably by the delays between phases of
this process.


You are asking good questions about the mechanisms, which I am trying to
explore empirically.  No good answers to this yet, although I have many
candidate solutions, some of which (I think) look like your above model.


I certainly agree with the sentiment that not *all* of the process can
be as fluid as the higher-level parts (if that is what you mean).




Richard Loosemore.



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 If I had 100 of the highest specification PCs on my desktop today (and
 it would be a big desk!) linked via a high speed network this wouldn't
 help me all that much.  Provided that I had the right knowledge I
 think I could produce a proof of concept type AGI on a single PC
 today, even if it ran like a tortoise.  It's the knowledge which is
 mainly lacking I think.

I agree that at the moment hardware is NOT the bottleneck.

This is why, while we've instrumented the Novamente system to
be straightforwardly extensible to a distributed implementation, we
haven't done much actual distributed processing implementation yet.

We have built commercial systems incorporating the NCE in simple
distributed architectures, but haven't gone the distributed-AGI direction
yet in practice -- because, as you say, it seems likely that the key
AGI problems can be
worked out on a single machine, and you can then scale up afterwards.

-- Ben



Re: [agi] None of you seem to be able ...

2007-12-07 Thread Benjamin Goertzel
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,

 To the extent it is not proprietary, could you please list some of the types
 of parameters that have to be tuned, and the types, if any, of
 Loosemore-type complexity problems you envision in Novamente or have
 experienced with WebMind, in such tuning and elsewhere?

 Ed Porter

A specific list of parameters would have no meaning without a huge
explanation which I don't have time to give...

Instead I'll list a few random areas where choices need to be made that appear
localized at first but wind up affecting the whole

-- attention allocation is handled by an artificial economy mechanism, which
has the same sorts of parameters as any economic system (analogues of
interest rates,
rent rates, etc.)

-- program trees representing internal procedures are normalized via a set of
normalization rules, which collectively cast procedures into a certain
normal form.
There are many ways to do this.

-- the pruning of (backward and forward chaining) inference trees uses a
statistical bandit problem methodology, which requires a priori probabilities
to be ascribed to various inference steps


Fortunately, though, in each of the above three
examples there is theory that can guide parameter tuning (different theories
in the three cases -- dynamic systems theory for the artificial economy; formal
computer science and language theory for program tree reduction; and Bayesian
stats for the pruning issue)
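
(To make the third item concrete: a minimal Thompson-sampling sketch of the
bandit-with-priors idea, in Python; the step names, prior counts and feedback
rule below are illustrative placeholders, not Novamente's actual rules or
parameters:)

import random

class InferenceStep:
    def __init__(self, name, prior_alpha=1.0, prior_beta=1.0):
        self.name = name
        self.alpha = prior_alpha  # prior pseudo-count of useful expansions
        self.beta = prior_beta    # prior pseudo-count of wasted expansions

    def sample(self):
        # Draw a plausible success probability from the Beta posterior.
        return random.betavariate(self.alpha, self.beta)

    def update(self, success):
        if success:
            self.alpha += 1
        else:
            self.beta += 1

def choose_step(steps):
    # Expand whichever step draws the highest sampled success probability.
    return max(steps, key=lambda s: s.sample())

steps = [InferenceStep("deduction", 3, 1),   # optimistic prior
         InferenceStep("abduction", 1, 1),   # uninformative prior
         InferenceStep("induction", 1, 3)]   # pessimistic prior
for _ in range(20):
    step = choose_step(steps)
    step.update(success=random.random() < 0.5)  # stand-in for real feedback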

The Webmind AI Engine had too many parameters and too much coupling between
subsystems.  We cast parameter optimization as an AI learning problem, but it
was a hard one, though we did make headway on it.  The Novamente Engine has much
coupling between subsystems, but no unnecessary coupling, and many fewer
parameters on which system behavior can sensitively depend.  Definitely,
minimizing the number of parameters that need adjustment is a very key
aspect of AGI system design.

-- Ben



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore


Mike,

I think you are going to have to be specific about what you mean by
"irrational", because you mostly just say that all the processes that
could possibly exist in computers are rational, and I am wondering what
else there is that "irrational" could possibly mean.  I have named many
processes that seem to me to fit the "irrational" definition, but
without being too clear about it you have declared them all to be just
"rational", so now I have no idea what you can be meaning by the word.



Richard Loosemore


Mike Tintner wrote:
Richard: This raises all sorts of deep issues about what exactly you
would mean by "rational".  If a bunch of things (computational processes) come
together and each contribute something to a decision that results in
an output, and the exact output choice depends on so many factors coming
together that it would not necessarily be the same output if roughly the
same situation occurred another time, and if none of these things looked
like a rule of any kind, then would you still call it rational?  If
the answer is "yes", then whatever would count as not rational?


I'm not sure what you mean - but this seems consistent with other 
impressions I've been getting of your thinking.


Let me try and cut through this: if science were to change from its 
prevailing conception of the human mind as a rational, computational 
machine to what I am suggesting - i.e. a creative, compositional, 
irrational machine - we would be talking of a major revolution that 
would impact right through the sciences - and radically extend the scope 
of scientific investigation into human thought. It would be the end of 
the deterministic conception of humans and animals and ultimately be a 
revolution of Darwinian proportions.


Hofstadter & co are absolutely not revolutionaries. Johnson-Laird 
conceives of the human mind as an automaton. None of them are 
fundamentally changing the prevailing conceptions of cognitive science. 
No one has reacted to them with shock or horror or delight.


I suspect that what you are talking about is loosely akin to the ideas 
of some that quantum mechanics has changed scientific determinism. It 
hasn't - the fact that we can't measure certain quantum phenomena with 
precision does not mean that they are not fundamentally deterministic. 
And science remains deterministic.


Similarly, if you make a computer system very complex, keep changing the 
factors involved in computations, add random factors  whatever, you are 
not necessarily making it non-rational. You make it v. difficult to 
understand the computer's rationality, (and possibly extend our 
conception of rationality), but the system may still be basically 
rational, just as quantum particles are still in all probability 
basically deterministic.


As a side-issue, I don't believe that human reasoning, conscious and 
unconscious, is  remotely, even infinitesimally as complex as that of 
the AI systems you guys all seem to be building. The human brain surely 
never seizes up with the kind of complex, runaway calculations that 
y'all have been conjuring up in your arguments. That only happens when 
you have a rational system that obeys basically rigid (even if complex) 
rules.  The human brain is cleverer than that - it doesn't have any 
definite rules for any activities. In fact, you should be so lucky as to 
have a nice, convenient set of rules, even complex ones,  to guide you 
when you sit down to write your computer programs.










Re: [agi] None of you seem to be able ...

2007-12-07 Thread Richard Loosemore

Jean-Paul Van Belle wrote:

Interesting - after drafting three replies I have come to realize
that it is possible to hold two contradictory views and live or even
run with it. Looking at their writings, both Ben & Richard know damn
well what complexity means and entails for AGI. Intuitively, I side
with Richard's stance that, if the current state of 'the new kind of
science' cannot even understand simple chaotic systems - the
toy problems of three-variable quadratic differential equations and
2-D Alife - then what hope is there to find a theoretical solution for
a really complex system? The way forward is by experimental
exploration of part of the solution space. I don't think we'll find
general complexity theories any time soon.

On the other hand, practically I think that it *is* (or may be) possible
to build an AGI system up carefully and systematically from the ground up,
i.e. inspired by a sound (or at least plausible) theoretical framework or
by modelling it on real-world complex systems that seem to work
(because that's the way I proceed too), fine-tuning the system
parameters and managing emerging complexity as we go along and move
up the complexity scale. (Just like engineers can build pretty much
anything without having a GUT.) Both paradigmatic approaches have
their merits and are in fact complementary: explore, simulate,
genetically evolve etc. from the top down to get a bird's-eye view of
the problem space, versus incrementally build up from the bottom
following a carefully charted path/ridge in between the chasms of
the unknown, based on a strong conceptual theoretical founding. It is
done all the time in other sciences - even maths!

Interestingly, I started out wanting to use a simulation tool to check
the behaviour (read: fine-tune the parameters) of my architectural
designs but then realised that the simulation of a complex system is
actually a complex system itself, and it'd be easier and more efficient
to prototype than to simulate. But that's just because of the nature of
my architecture. Assuming Ben's theories hold, he is adopting the
right approach. Given Richard's assumptions or intuitions, he is
following the right path too. I doubt that they will converge on a
common solution, but the space of conceivably possible AGI
architectures is IMHO extremely large. In fact, my architectural
approach is a bit of a poor cousin/hybrid: having neither Richard's
engineering skills nor Ben's mathematical understanding, I am hoping
to follow a scruffy alternative path :)


Interesting thoughts:  remind me, if I forget, that when I get my
website functioning and can put longer papers into a permanent
repository, we all need to have a forward-looking discussion about
some of the detailed issues that might arise here.  That is, going
beyond merely arguing about whether or not there is a problem.  I have
many thoughts about what you say, but no time right now, so I will come
back to this.


The short version of my thoughts is that we need to look into some of 
the details of what I propose to do, and try to evaluate the possible 
dangers of not taking the path I suggest.




Richard Loosemore




Re: [agi] Solution to Grounding problem

2007-12-07 Thread Richard Loosemore

Dennis Gorelik wrote:

Richard,

It seems that under "Real Grounding Problem" you mean "Communication
Problem."

Basically your goal is to make sure that when two systems communicate
with each other -- they understand each other correctly.

Right?

If that's the problem -- I'm ready to give you my solution.


BTW, I had to read your explanation 3 times to get it [if I got it].
:-)


Don't feel bad:  my explanation was horribly compressed, and not 
necessarily very well articulated, and the actual claim is extremely 
abstract and susceptible to misinterpretation (about 95% of the 
literature on the SGP is a complete misinterpretation!).


I don't think it is quite a communication problem, though.  The issue 
is much more like the error that destroyed that NASA Mars spacecraft 
several years ago (can't remember which one:  they busted so many of 
them).  The one that had one software module calculating in kilometers 
and the other module calculating in miles, so the results passed from 
one to the other became meaningless.


This could be called a communication problem, but it is internal, and in
the AGI case it is not as simple as just miscalculated numbers.
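
(A toy Python illustration of that kind of internal mismatch; the module names,
values and units are invented for the example, not taken from the actual NASA
incident:)

def navigation_module_distance():
    # Produces a distance it implicitly treats as kilometres; nothing in
    # the bare number records that assumption.
    return 120.0

def burn_planner(distance):
    # Written by a different team that implicitly assumes miles.
    MILES_TO_KM = 1.60934
    return distance * MILES_TO_KM

# The number passed between the modules is the same; its *meaning* is not,
# so the result is meaningless even though every line of code is "correct".
print(burn_planner(navigation_module_distance()))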


So here is a revised version of the problem:  suppose that a system 
keeps some numbers stored internally, but those numbers are *used* by 
the system in such a way that their meaning is implicit in the entire 
design of the system.  When the system uses those numbers to do things, 
the numbers are fed into the using mechanisms in such a way that you 
can only really tell what the numbers mean by looking at the overall 
way in which they are used.


Now, with that idea in mind, now imagine that programmers came along and 
set up the *values* for a whole bunch of those numbers, inside the 
machine, ON THE ASSUMPTION that those numbers meant something that the 
programmers had decided they meant.  So the programmers were really 
definite and explicit about the meaning of the numbers.


Question:  what if those two sets of meanings are in conflict?

This is effectively what the SGP (symbol grounding problem) is all 
about.  Some AI folks start out by building a program in which they 
decide ahead of time what the symbols mean, and they insert a whole 
bunch of actual symbols (AND mechanisms that operate on symbols) into 
the system on the assumption that their chosen meanings are valid.


This becomes a problem because when we say of another person that they
meant something by their use of a particular word (say "cat"), what we
actually mean is that that person had a huge amount of cognitive
machinery connected to that word "cat" (reaching all the way down to the
sensory perception mechanisms that allow the person to recognise an
instance of a cat, and motor output mechanisms that let them interact
with a cat).


What Stephen Harnad said in his original paper was "Hang on a second:
if the AI system does not have all that other machinery inside it when
it uses a word like 'cat', surely it does not really mean the same
thing by 'cat' as a person would?"


In effect, he was saying that the very limited machinery inside a simple
AI system will have an *implicit* meaning for "cat" which is very crude,
because it does not have all that other stuff that we have inside our
heads, connected to the cat concept.  When you ask the AI "Are cats
fussy?" it will only be able to do something crude like see if it has a
memory item recording a fact about cats and fussiness.  A person on the
other hand (if they know cats) will be able to deploy a huge amount of
knowledge about both the [cat] concept and the [fussy] concept, and come
to a sophisticated conclusion.  What Harnad would say is that the AI
does not really have the same meaning attached to "cat" as people do.
He then went on to say that the only way to resolve this problem is to
make sure that the system is connected to the real world so it can pick
up its own symbols, and only when it has all that real-world connection
machinery, and builds symbols in the way that we do, will the system
really be able to get the meaning of a word like "cat".  Harnad
summarized that by saying that AI systems need to have their symbols
"grounded" in the real world.


Now this is where the confusion starts.  Lots of people heard him
suggest this, and then thought: "No problem: we'll attach some video
cameras and robot arms to our AI and then it will be grounded!"


This is a disastrous misunderstanding of the problem.  If the AI system
starts out with a design in which symbols are designed and stocked by
programmers, this part of the machine has ONE implicit meaning for its
symbols ... but then if a bunch of peripheral machinery is stapled on
the back end of the system, enabling it to see the world and use robot
arms, the processing and symbol building that goes on in that
part of the system will have ANOTHER implicit meaning for the symbols. 
There is no reason why these two sets of symbols should have the same 
meaning! 

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner




Matt: AGI research needs
special hardware with massive computational capabilities.




Could you give an example or two of the kind of problems that your AGI 
system(s) will need such massive capabilities to solve? It's so good - in 
fact, I would argue, essential - to ground these discussions. 





Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Richard Loosemore

Ed Porter wrote:

RICHARD LOOSEMORE= At the cognitive level, on the other hand, there is
a strong possibility that, when the mind builds a model of
some situation, it gets a large number of concepts to come together and
try to relax into a stable representation, and that relaxation process
is potentially sensitive to complex effects (some small parameter in the
design of the concepts could play a crucial role in ensuring that the
relaxation process goes properly, for example)

ED PORTER= Copycat uses a variant of simulated annealing to do its
relaxation process, except it is actually a much more chaotic relaxation
process than many (e.g., much more than Hecht-Nielsen's Confabulation),
because it involves millions of separate codelets being generated to score,
decide the value of, and add or remove elements from a graph that labels
groupings and relationships in the initial string, and between the example
initial string and the solution initial string, and between the example
initial string and the example changed string, and between both the
solution initial string and the example changed string and the solution
changed string, as well as constructing the solution changed string itself
during this process.


Each of the labelings and mapping links is made by a separate small program
called a codelet.  Codelets are chosen in a weighted random manner.  And one
codelet can clobber the work done by another.  The ratio of importance
between some fitness weighting and pure randomness in the picking of codelets
varies with temperature, which is a measure of overall labeling, mapping,
and solution fit, and which tends to go down over time as the system moves
toward a coherent solution.  But it can go up if the system starts settling
into a solution that creates a mapping or labeling flaw, at which time more
random codelets will be created and will randomly change the system, but with the
changes being more likely in the parts of the graph or labeling that have
the least good fit, and thus require the least energy to kick apart.
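
(For concreteness, a minimal Python sketch of that temperature-blended selection
rule; the codelet names and urgency numbers are invented for illustration, and
this is only a cartoon of Copycat's actual mechanism:)

import random

def pick_codelet(codelets, temperature, rng=random):
    # codelets: list of (name, urgency); temperature in [0, 100].
    # At high temperature the choice is close to uniform random; at low
    # temperature it tracks each codelet's fitness ("urgency") weighting.
    t = temperature / 100.0
    flat = sum(u for _, u in codelets) / len(codelets)
    weights = [(1 - t) * u + t * flat for _, u in codelets]
    return rng.choices([name for name, _ in codelets], weights=weights, k=1)[0]

codelets = [("bond-builder", 8.0), ("group-scout", 3.0), ("rule-breaker", 1.0)]
print(pick_codelet(codelets, temperature=90))  # nearly uniform choice
print(pick_codelet(codelets, temperature=10))  # mostly urgency-driven choice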

Despite this very chaotic process, and the fact that this process is sensitive to
complex dynamic effects that enable a slight change of state to cause it to
settle into different solutions, as Richard mentioned above, the weighting
of the system, which varies dynamically in a context-sensitive way, causes
most of the solutions that it settles into to be appropriate, although they
may be quite different.


For example, for the copycat problem where the goal is to change ijkk in a
manner similar to that in which aabc was changed to produce aabd, which
problem can be represented as

ex  aabc --> aabd
    ijkk --> ?

On one thousand runs the results were:

#   occurrences   result   temperature
1   612           ijll     29
2   198           ijkl     49
3   121           jjkk     47
4   47            hjkk     19
5   9             jkkk     42
6   6             ijkd     57
7   3             ijdd     46
8   3             ijkk     69
9   1             djkk     58

===EXPLANATION OF ANALOGY IN EACH SOLUTION===
ex - last char in string has alphabet number incremented
1 - last set of the same chars in each string had alphabet number incremented
2 - last char in each string had alphabet number incremented
3 - one end char in each string had alphabet number incremented
4 - one end char in each string had alphabet number changed by one
5 - set of chars in string had alphabet numbers incremented
6 - last char in each string is changed to d
7 - last set of same chars in each initial string was changed to d
8 - last char in each string had alphabet number changed by a value of zero or one
9 - one char on end of string was changed to d

So you see that each of the changes except solution 8, which had the worst
temperature (meaning the system felt it was the worst fit), actually
captured an analogous change.  If temperature were used to filter out the
misfits, none of the runs would have produced a non-analogy.  So despite
the chaotic nature of the system, it almost always settled on a labeling,
graphing, and solution that was appropriate, and when it didn't, it knew it
didn't, because of the system's measure of analogical fit.

Although this definitely is a toy problem, it might have as much potential
for complexity as the game of life, in terms of its number of components
(if you count its codelets), its computations, and its non-linearities.  I
was told by somebody who worked with Hofstadter that individual copycat
solutions running on unoptimized LISP code on roughly 1990s Sun
workstations normally took between about half an hour and a major fraction of a day.


The difference between this and the game of life is that this has been designed
to work.  Despite its somewhat chaotic manner of 

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner
Thanks. And I repeat my question from elsewhere: you don't think that the human
brain, which does this in, say, half a second (right?), is using massive
computation to recognize that face?


You guys with all your mathematical calculations re the brain's total 
neurons and speed of processing surely should be able to put ball-park 
figures on the maximum amount of processing that the brain can do here.


Hawkins argues:

"Neurons are slow, so in that half a second, the information entering your
brain can only traverse a chain ONE HUNDRED neurons long... the brain
'computes' solutions to problems like this in one hundred steps or fewer,
regardless of how many total neurons might be involved. From the moment
light enters your eye to the time you [recognize the image], a chain no
longer than one hundred neurons could be involved. A digital computer
attempting to solve the same problem would take BILLIONS of steps. One
hundred computer instructions are barely enough to move a single character
on the computer's display, let alone do something interesting."


IOW, if that's true, the massive computational approach is surely 
RIDICULOUS - a grotesque travesty of engineering principles of economy, no? 
Like using an entire superindustry of people to make a single nut? And, of 
course, it still doesn't work. Because you just don't understand how 
perception works in the first place.


Oh right... so let's make our computational capabilities even more massive, 
right?  Really, really massive. No, no, even bigger than that?




  Matt: AGI research needs
  special hardware with massive computational capabilities.


Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I would argue, essential - to ground these discussions.


For example, I ask the computer "who is this?" and attach a video clip from
my security camera.






Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Mike,
I think you are going to have to be specific about what you mean by
"irrational", because you mostly just say that all the processes that
could possibly exist in computers are rational, and I am wondering
what else there is that "irrational" could possibly mean.  I have
named many processes that seem to me to fit the "irrational"
definition, but without being too clear about it you have declared
them all to be just "rational", so now I have no idea what you can be
meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally 
any step of

problem-solving, then it is truly crazy/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, 
because it

will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem  (which 
can incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change 
the rules of engagement much of the time in our discussions here).


Listen, no need to reply - because you're obviously not really 
interested. To me that's ironic, though, because this is absolutely the 
most central issue there is in AGI. But no matter.


No, I am interested, I was just confused, and I did indeed miss the 
above definition (got a lot I have to do right now, so am going very 
fast through my postings) -- sorry about that.


The fact is that the computational models I mentioned (those by 
Hofstadter etc) are all just attempts to understand part of the problem 
of how a cognitive system works, and all of them are consistent with the 
design of a system that is irrational according to your above 
definition.  They may look rational, but that is just an illusion: 
every one of them is so small that it is completely neutral with respect 
to the rationality of a complete system.  They could be used by someone 
who wanted to build a rational system or an irrational system, it does 
not matter.


For my own system (and for Hofstadter too), the natural extension of the 
system to a full AGI design would involve


a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving ... it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

I prefer not to use the term irrational to describe it (because that 
has other connotations), but using your definition, it would be irrational.


There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you 
would think that this is a difficult thing to do?  It is not difficult 
to design a system this way:  some people like the trad-AI folks don't 
do it (yet), and appear not to be trying, but there is nothing in 
principle that makes it difficult to build a system of this sort.





Richard Loosemore





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Mike Tintner

Richard: Mike,
I think you are going to have to be specific about what you mean by
"irrational", because you mostly just say that all the processes that could
possibly exist in computers are rational, and I am wondering what else
there is that "irrational" could possibly mean.  I have named many processes
that seem to me to fit the "irrational" definition, but without being too
clear about it you have declared them all to be just "rational", so now I
have no idea what you can be meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally any 
step of

problem-solving, then it is truly crazy/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem  (which can 
incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here).


Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.





Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 Thanks. And I repeat my question elsewhere : you don't think that the human 
 brain which does this in say half a second, (right?), is using massive 
 computation to recognize that face?

So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and this
will all run on your PC?  Let me know when your program is finished so I can
try it out.

 You guys with all your mathematical calculations re the brain's total 
 neurons and speed of processing surely should be able to put ball-park 
 figures on the maximum amount of processing that the brain can do here.
 
 Hawkins argues:
 
 "Neurons are slow, so in that half a second, the information entering your
 brain can only traverse a chain ONE HUNDRED neurons long... the brain
 'computes' solutions to problems like this in one hundred steps or fewer,
 regardless of how many total neurons might be involved. From the moment
 light enters your eye to the time you [recognize the image], a chain no
 longer than one hundred neurons could be involved. A digital computer
 attempting to solve the same problem would take BILLIONS of steps. One
 hundred computer instructions are barely enough to move a single character
 on the computer's display, let alone do something interesting."

Which is why the human brain is so bad at arithmetic and other tasks that
require long chains of sequential steps.  But somehow it can match a face to a
name in 0.5 seconds.  Neurons run in PARALLEL.  Your PC does not.  Your brain
performs 10^11 weighted sums of 10^15 values in 0.1 seconds.  Your PC will
not.
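
(Rough back-of-envelope arithmetic behind those figures, in Python; the ~5 ms
per neuron step is an illustrative assumption used only to recover the ~100-step
estimate quoted above:)

neurons     = 1e11    # ~10^11 neurons doing weighted sums
inputs_each = 1e4     # so ~10^15 summed values in total, as quoted above
window      = 0.1     # seconds

ops_per_second = neurons * inputs_each / window   # ~1e16 synaptic operations per second
serial_steps   = 0.5 / 0.005                      # ~100 steps in half a second at ~5 ms per neuron
print("%.0e ops/s, about %d serial steps" % (ops_per_second, serial_steps))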


 
 IOW, if that's true, the massive computational approach is surely 
 RIDICULOUS - a grotesque travesty of engineering principles of economy, no? 
 Like using an entire superindustry of people to make a single nut? And, of 
 course, it still doesn't work. Because you just don't understand how 
 perception works in the first place.
 
 Oh right... so let's make our computational capabilities even more massive, 
 right?  Really, really massive. No, no, even bigger than that?
 
 
   Matt,:AGI research needs
   special hardware with massive computational capabilities.
  
 
  Could you give an example or two of the kind of problems that your AGI
  system(s) will need such massive capabilities to solve? It's so good - in
  fact, I would argue, essential - to ground these discussions.
 
 For example, I ask the computer who is this? and attach a video clip from 
 my
 security camera.
 


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Ed Porter
Richard, With regard to your below post:

RICHARD LOOSEMORE ### Allowing the system to adapt to the world by
giving it flexible mechanisms that *build* mechanisms (which it then uses)
is one way to get the system to do some of the work of fitting parameters
(as Ben would label it), or reducing the number of degrees of freedom that
we have to deal with.

But that would be different from *our* efforts, as designers of the system,
to design different possible mechanisms, then do tests to establish what
kind of system behavior they cause.  We have to do this generate and test
experimentation in parallel with the system's own attempts to adapt and
build new internal mechanisms.  They are two different processes, both of
which are designed to home in on the best design for an AGI, and they do
need to be considered separately.

ED PORTER ### I don't understand in exactly what ways you think the
experientially learned and the designed features should be treated
differently, and how this relates to the potential pitfalls of complexity.

Of course they would normally be considered differently (you have to
directly design one, the other is learned automatically by a system you
design).  I think there needs to be joint development of them, because the
designed mechanisms are intended to work with the learned ones, and vice
versa.

In the system I have been thinking of, most of the experientially learned
patterns are largely drawn from, or synthesized from, recorded experience in
a relatively direct manner, not from some sort of Genetic Algorithm that
searches large spaces to find some algorithm which compactly represents
large amounts of experiences.  This close connection with sensed, behaved,
or thought experience tends to make such systems more stable.

But it is not clear to me that all experientially learned things are
necessarily safer than designed things.  For example, Novamente uses
MOSES, which is a genetic algorithm learning tool.  I think such a tool is
not directly needed for an AGI and probably has no direct analogy in the
brain. I think the brain uses something that is a rough combination of
Copycat's type of relaxation assembly with something like the
superimposed probabilities of Hecht-Nielsen's confabulation to explore new
problem spaces, and that this process is repeated over and over again when
trying to solve complex problems, with the various good features of
successive attempts being remembered as part of an increasing learned
vocabulary of patterns from which new syntheses are more likely to be
performed (all of which is arguably an analog of a GA).

I can, however, understand how a genetic algorithm like MOSES could add
tremendous learning, exploratory, and perhaps even representational power to
an AGI, particularly for certain classes of problems.  BUT I HAVE LITTLE
UNDERSTANDING OF EXACTLY WHAT TYPE OF COMPLEXITY DANGERS SUCH A GENETIC
ALGORITHM PRESENTS.  GAs have been successfully used for multiple purposes,
particularly where one has a clearly defined and measurable fitness
function.  But it is not clear to me what happens if you use GAs to control
an AGI's relatively high levels of behavior in a complex environment for
which there would often not be any simply applicable fitness function.  Nor
is it clear to me what happens if you have a large number of GA-controlled
systems interacting with each other.

It would seem to me they would have much more potential for gnarliness than
my more experientially based learning systems, but I really don't know.

Ben would probably know much more about this than most. 
 

RICHARD LOOSEMORE ###The other major comment that I have is that the
*main* strategy that I have for reducing the number of degrees of freedom
(in the design) is to keep the design as close as possible to the human
cognitive system.

This is where my approach and the Novamente approach part company in a
serious way.  I believe that the human design has already explored the space
of possible solutions for us (strictly speaking it is evolution that did the
exploration when it tried out all kinds of brain designs over the eons).  I
believe that this will enable us to drastically reduce the number of
possibilities we have to explore, thus making the project feasible.

My problem is that it may be tempting to see a ground-up AGI design (in
which we just get a little inspiration from the human system, but mostly we
ignore it) as just as feasible when in fact it may well get bogged down in
dead ends within the space of possible AGI designs.

ED PORTER ### You might be right, you might be wrong.  It is my
intuition that you do not need to reverse-engineer the human brain to build
AGIs.  I think some of the types of design mistakes you envision from not
waiting until we get the whole picture of how the brain works will
probably require some significant software revisions, but such revisions
are common in the development of complex systems of a new type.  I think we

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-07 Thread Jean-Paul Van Belle
Hi Matt, Wonderful idea, now it will even show the typical human trait of
lying... when I ask it 'do you still love me?' most answers in its database will
have 'Yes' as an answer, but when I ask it 'what's my name?' it'll call me John?

However, your approach is actually already being implemented to a certain
extent. Apparently (was it Newsweek, Time?) the No. 1 search engine in
(Singapore? Hong Kong? Taiwan? - sorry, I forgot) is *not* Google but a
local-language QA system that works very much the way you envisage it (except
it collects the answers in its own SAN, i.e. not distributed over the user machines).

=Jean-Paul
 On 2007/12/07 at 18:58, in message
 [EMAIL PROTECTED], Matt Mahoney
 [EMAIL PROTECTED] wrote:
 
 Hi Matt
 
 You call it an AGI proposal but it is described as a distributed search
 algorithm that (merely) appears intelligent, i.e. a design for an
 Internet-wide message posting and search service. There doesn't appear to
 be any grounding or semantic interpretation by the AI system? How will it
 become more intelligent?

Turing was careful to make no distinction between being intelligent and
appearing intelligent.  The requirement for passing the Turing test is to be
able to compute a probability distribution P over text strings that varies
from the true distribution no more than it varies between different people. 
Once you can do this, then given a question Q, you can compute answer A that
maximizes P(A|Q) = P(QA)/P(Q).
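
As a purely illustrative sketch of that selection rule (the toy unigram model
below is just a stand-in for whatever supplies P; it is not part of the
proposal itself):

    import math
    from collections import Counter

    # stand-in for the true distribution P: a unigram model over a tiny corpus;
    # any real estimate of P could be swapped in here
    corpus = "the cat sat on the mat the dog sat on the log".split()
    counts = Counter(corpus)
    total = sum(counts.values())

    def log_p(text):
        # log-probability of a string under the unigram model (add-one smoothing)
        return sum(math.log((counts[w] + 1) / (total + len(counts) + 1))
                   for w in text.split())

    def best_answer(question, candidates):
        # P(Q) is the same for every candidate A, so maximizing
        # P(A|Q) = P(QA)/P(Q) reduces to maximizing log P(QA)
        return max(candidates, key=lambda a: log_p(question + " " + a))

    print(best_answer("where did the cat sit",
                      ["on the mat", "under a zeppelin"]))   # -> "on the mat"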

This does not require grounding.  The way my system appears intelligent is by
directing Q to the right experts, and by being big enough to have experts on
nearly every conceivable topic of interest to humans.

A lot of AGI research seems to be focused on how to represent knowledge and
thought efficiently on a (much too small) computer, rather than on what
services the AGI should provide for us.

-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21



Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Dougherty
On Dec 7, 2007 7:41 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:

  No, my proposal requires lots of regular PCs with regular network
 connections.

 Properly connected, a set of regular PCs would usually have way more
 power than a regular PC.
 That makes your hardware request special.
 My point is - AGI can successfully run on a single regular PC.
 Special hardware would be required later, when you try to scale
 out a working AGI prototype.


I believe Matt's proposal is not as much about the exposure to memory or
sheer computational horsepower - it's about access to learning experience.
A supercomputer atop an ivory tower (or in the deepest government
sub-basement) has an immense memory and speed (and dense mesh of
interconnects, etc., etc.) - but without interaction from outside itself,
it's really just a powerful navel-gazer.

Trees do not first grow a thick trunk and deep roots, then change to growing
leaves to capture sunlight.  As I see it, each node in Matt's proposed
network enables IO to us [existing examples of intelligence/teachers].
Maybe these nodes can ask questions, What does my owner know of A? - the
answer becomes part of its local KB.  Hundreds of distributed agents are now
able to query Matt's node about A (clearly Matt does not have time to answer
500 queries on topic A).

During the course of processing the local KB on topic A, there is a
reference to topic B.  Matt's node automatically queries
every node that previously asked about topic A (seeking the first likely
authority on the inference) - my node asks me, What do you know of B?  Is
A-B?  I contribute to my node's local KB, and it weights the inference for
A-B.  This answer is returned to Matt's node (among potentially hundreds of
other relative weights) and Matt's node strengthens the A-B inference based
on the received responses.  At this point, the weights for A-B are
distributed all over the network, depending on the local KB of each node and
the historical traffic of query/answer flow.

After some time, I ask my node about topic C.  It knows nothing of topic C,
so it asks me directly to
deposit information into the local KB (initial context) - through the course
of 'conversation' with other nodes, my answer comes back as the aggregate of
the P2P knowledge within a query radius.  On a simple question I may only
allow 1 hour of think time; for a deeper research project that query radius
may be allowed to extend over 2 weeks of interconnect.  During my research,
my node will necessarily become interested in topic C - and will likely
become known among the network as the local expert.  (Local expert for a
topic would be a useful designation, both for weighting each node as a
primary query target and for 'trusting' the weight of the answers from each node.)

I don't think this is vastly different from how people (as working examples
of intelligence nodes) gather knowledge from peers.
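
A toy sketch of that query/weighting loop (the class, weights, and hop radius
are invented for illustration; this is not Matt's actual protocol):

    class Node:
        def __init__(self, name):
            self.name = name
            self.kb = {}       # topic -> {inference: weight}; the local KB
            self.asked = {}    # topic -> [peers who previously asked about it]

        def deposit(self, topic, inference, weight=1.0):
            # the owner (a human teacher) contributes directly to the local KB
            self.kb.setdefault(topic, {})
            self.kb[topic][inference] = self.kb[topic].get(inference, 0.0) + weight

        def query(self, topic, asker=None, hops=2):
            # remember who asked: past askers become the first likely authorities
            if asker is not None:
                self.asked.setdefault(topic, []).append(asker)
            answer = dict(self.kb.get(topic, {}))
            if hops > 0:
                for peer in self.asked.get(topic, []):
                    if peer is not asker:
                        for inference, w in peer.query(topic, self, hops - 1).items():
                            # aggregate (and so implicitly weight) peer answers
                            answer[inference] = answer.get(inference, 0.0) + w
            return answer      # the P2P aggregate within the hop (query) radius

    # usage: my node once asked Matt's node about A, so Matt's node later polls
    # it (and any other past askers) when it needs evidence about A-B
    mine, matts = Node("mine"), Node("matts")
    matts.query("A", asker=mine)            # my node asks Matt's node about A
    mine.deposit("A", "A-B", weight=0.7)    # my owner teaches my node A-B
    print(matts.query("A"))                 # {'A-B': 0.7}, gathered from past askers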

Perhaps this approach treats intelligence not as an absolute definition so
much as a best effort / most useful answer to date intention.  Even if this
schema does not extend to emergent AGI, it builds a useful infrastructure
that can be utilized by currently existing intelligences as well as whatever
AGI does eventually come into existence.

Matt, is this coherent with your view or am I off base?


Re: Re[2]: [agi] Interpreting Brain damage experiments

2007-12-07 Thread Vladimir Nesov
Hippocampus damage and the resulting learning deficiencies are very
interesting phenomena. They probably show how important high-level
control of learning is in efficient memorization, particularly in
memorization of regularities that are presented only a few times (or
just once, as in the case of episodic memories) and are successfully
memorized by healthy people but not by people with a damaged
hippocampus. People with a damaged hippocampus are still able to
memorize regularities that pass sufficiently many times through their
perception (which is how low-level subsystems probably learn
normally). They can compensate for regularities that they can
deliberately recite, like text, but not whole episodic memories.

It shows a limitation of Hebbian learning, of the balance between
gathering information about a regularity and applying it to reinforce
that regularity, and of the importance of a high-level mechanism that is
able to compensate for this property. This, I think, can be a useful
observation for AGI design.
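
A toy numerical sketch of that balance (the learning rate and the reliability
threshold below are arbitrary): a Hebbian link that strengthens a little per
co-occurrence needs many presentations before it becomes reliable, whereas a
high-level recitation loop that internally replays one approved episode can
reach the same strength from a single exposure.

    LEARNING_RATE = 0.05
    RELIABLE = 0.5     # strength at which the link starts to drive activation

    def hebbian_updates(strength, presentations):
        # each co-occurrence nudges the link a little toward 1.0
        for _ in range(presentations):
            strength += LEARNING_RATE * (1.0 - strength)
        return strength

    # pure low-level learning: needs on the order of a dozen presentations
    w, n = 0.0, 0
    while w < RELIABLE:
        w = hebbian_updates(w, 1)
        n += 1
    print("presentations needed without high-level help:", n)        # 14

    # high-level control: one real episode, internally recited 14 times
    print("strength after one episode recited 14 times:",
          round(hebbian_updates(0.0, 14), 2))                        # 0.51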


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



[agi] High-level brain design patterns

2007-12-07 Thread Dennis Gorelik
Derek,

 Low level design is not critical for AGI. Instead we observe high level brain
 patterns and try to implement them on top of our own, more understandable,
 low level design.
   
  I am curious what you mean by high level brain patterns
 though.  Could you give an example?

1) All dependencies we may observe between inputs or outputs.
For example, conditional reflex and unconditional reflex.

2) Activation of neuron A that happens _consistently_ with activation
of neuron B.

3) Richard Loosemore already gave his example:
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
For example, much to our surprise we might see waves in the U values.
And every time two waves hit each other, a vortex is created for
exactly 20 minutes, then it stops.
  


Re[2]: [agi] Interpreting Brain damage experiments

2007-12-07 Thread Dennis Gorelik
Richard,

Let's save both of us time and wait until somebody else reads this
Cognitive Science book and comes here to discuss it.
:-)

Though interesting, interpreting brain damage experiments is not the
most important thing for AGI development.

 In both cases vision module works good.
 Vision-to-identification works fine in both cases.
 
 In this case identified object cannot produce proper actions, because
 connection with action module was damaged.
 
 In another case identified object cannot be resolved into language
 concept, because connection with language module was damaged.
 
 Agree?

 I don't think this works, unfortunately, because that was the first 
 simple explanation that people came up with, and it did not match up
 with the data at all.  I confess I do not have time to look this up 
 right now.  You wouldn't be able to read one of the latest cognitive
 neuropsychology books (not cognitive neuroscience, note) and let me know
 would you? ;-)





Re[2]: [agi] Solution to Grounding problem

2007-12-07 Thread Dennis Gorelik
Richard,


 This could be called a communcation problem, but it is internal, and in
 the AGI case it is not so simple as just miscalculated numbers.

Communication between subsystems is still communication.
So I suggest calling it a Communication problem.


 So here is a revised version of the problem:  suppose that a system
 keeps some numbers stored internally, but those numbers are *used* by
 the system in such a way that their meaning is implicit in the entire
 design of the system.  When the system uses those numbers to do things,
 the numbers are fed into the using mechanisms in such a way that you
 can only really tell what the numbers mean by looking at the overall
 way in which they are used.

That's the right approach to doing things. Concepts gain meaning
by connecting to other concepts.
The only exception is concepts that are directly connected to
hardcoded sub-systems (dictionary, chat client, web browser, etc.).
Such directly connected concepts would have some predefined meaning.
This predefined meaning would be injected by the AGI programmers.


 Now, with that idea in mind, now imagine that programmers came along and
 set up the *values* for a whole bunch of those numbers, inside the 
 machine, ON THE ASSUMPTION that those numbers meant something that the
 programmers had decided they meant.  So the programmers were really 
 definite and explicit about the meaning of the numbers.

 Question:  what if those two sets of meanings are in conflict?

How could they be in conflict, if one set is predefined, and the other
set gained its meaning from the predefined set?

If you are talking about inconsistencies within the predefined set --
that's a problem for the design and development team.
Do you want to address this problem?
So far I can suggest one tip: keep the set of predefined concepts as
small as possible.
Most of a mature AGI's intelligence should come from concepts (and their
relations) acquired during the system's lifetime.


 If the AI system starts out with a design in which symbols are
 designed and stocked by 
 programmers, this part of the machine has ONE implicit meaning for its
 symbols . but then if a bunch of peripheral machinery is stapled on
 the back end of the system, enabling it see the world and use robot 
 arms, the processing and symbol building that goes on in that
 part of the system will have ANOTHER implicit meaning for the symbols.
 There is no reason why these two sets of symbols should have the same
 meaning!

Here's my understanding of your problem:
We have an AGI, and now we want to extend it by adding a new module.
We are afraid that the new module will have problems communicating with other
modules, because the meaning of some symbols is different.

If I understood you correctly, here are two solutions (a sketch of the first
follows below):

Solution #1: Connect modules through a Neural Net.

By Neural Net here I mean a set of concepts (nodes) connected to other
concepts by relations.
Concepts can be created and deleted dynamically.
Relations can be created and deleted dynamically.
When we connect a new module to the system, it will introduce its own
concepts into the Neural Net.
Initially these concepts are not connected with existing concepts.
But then some process will connect these new concepts with existing
concepts.
One example of such a process could be: if concepts are active at the
same time -- connect them.
There could be other possible connecting processes.
In any case, the system would eventually connect all the new concepts, and
those connections would define how input from the new module is interpreted
by the rest of the system.

Solution #2: Connect the new module to other hardcoded modules
directly.
In this case it's the responsibility of the AGI development team to make sure
that both hardcoded modules talk the same language.
That's a typical module integration task for developers.
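
A toy sketch of Solution #1's connecting process (illustrative only; the
concept names, the co-activation rule, and the weights are made up):

    from itertools import combinations

    concepts = {"cat", "keyboard", "web_page"}   # existing concepts
    links = {}                                   # (a, b) -> strength; one-way links

    def plug_in_module(new_concepts):
        # a new module just introduces its concepts; initially unconnected
        concepts.update(new_concepts)

    def on_coactivation(active, increment=0.1):
        # connecting process: concepts active at the same time get linked
        # (both directions, each direction being its own one-way link)
        for a, b in combinations(active, 2):
            links[(a, b)] = links.get((a, b), 0.0) + increment
            links[(b, a)] = links.get((b, a), 0.0) + increment

    plug_in_module({"camera_blob_17"})           # e.g. a new vision module's concept
    on_coactivation({"camera_blob_17", "cat"})   # blob seen while 'cat' is active
    on_coactivation({"camera_blob_17", "cat"})
    print(links[("camera_blob_17", "cat")])      # 0.2 -- the new concept is now
                                                 # wired into the existing net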



 In fact, it turns out (when you think about it a little 
 longer) that all of the problem has to do with the programmers going in
 and building any symbols using THEIR idea of what the symbols should
 mean:  the system has to be allowed to build its own symbols from the
 ground up, without us necessarily being able to interpret those symbols
 completely at all.  We might never be able to go in and look at a 
 system-built symbol and say That means [x], because the real meaning
 of that symbol will be implicit in the way the system uses it.

 In summary:  the symbol grounding problem is that systems need to have
 only one interpretation of their symbols,

Not sure what you mean by one interpretation.
A symbol can have multiple interpretations in different contexts.
Our goal is to make sure that different systems and different modules
have ~the same understanding of the symbols at the time of communication.
(By symbols here I mean data that is passed through interfaces.)

 and it needs to be the one built by the system itself as a result of
 a connection to the external world.

So it seems you already have a solution (I propose the same solution)
to the Real Grounding Problem.

Can 

Re: [agi] AGI communities and support

2007-12-07 Thread Vladimir Nesov
On Dec 8, 2007 2:10 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 The Russians have traditionally had more than their share of math whizzes,
 so I am surprised there isn't more interest in this subject there.

 I don't understand "I wonder where your question has a positive answer and
 how it can look like."

 Perhaps you mean, you wonder where one would be able to positively answer
 such a question.  The answer to that is that I know of no place that is
 funding AGI, proper, like you think they would.  The US government is
 funding a fair amount of traditional AI, but not yet real AGI.


Yes, that's what I mean. But communities can exist irrespective of
funding, like this one and, in previous years, SL4. I know of no
other online community that is focused on AGI (although I speak only
Russian and English, so there may be some in other languages;
Japanese, anyone?)


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] AGI communities and support

2007-12-07 Thread Ed Porter
Vlad,

The Russians have traditionally had more than their share of math whizzes,
so I am surprised there isn't more interest in this subject there.

I don't understand "I wonder where your question has a positive answer and
how it can look like."

Perhaps you mean, you wonder where one would be able to positively answer
such a question.  The answer to that is that I know of no place that is
funding AGI, proper, like you think they would.  The US government is
funding a fair amount of traditional AI, but not yet real AGI.  

Of course we do not know what might be being done in secret in various
countries' militaries, intelligence services, or corporations.  We might
wake up tomorrow to find out that Google already has one up and running.  You
never know.

Ed Porter


-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 5:35 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI communities and support

On Dec 8, 2007 1:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 What country are you in?

 And what is the level of web-community, academic, commercial, and
 governmental support for AGI in your country?

 Ed Porter


I live in Moscow. AGI-related activities are nonexistent here; there's
a small web community, but I don't follow its discussions, archives
for recent years don't show anything interesting.

I wonder where your question has a positive answer and how it can look like.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Mike Tintner # Yes, I understood that (though sure, I'm capable of
misunderstanding anything here!) 

ED PORTER # Great, I am glad you understood this.  Part of what you
said indicated you did.  BTW, we are all capable of misunderstanding things.

 

Mike Tintner # Hawkins' basic point that the brain isn't a computer
at all -  which I think can be read less controversially as is a machine
that works on very fundamentally different principles to those of currently
programmed computers - especially when perceiving objects -  holds.

 

You're not dealing with that basic point, and I find it incredibly difficult
to get anyone here squarely to face it. People retreat into numbers and
millions.

 

ED PORTER # I think most of us understand that and are not disputing
it.  A Novamente-like approach to AGI is actually quite similar to Hawkins'
in many ways.  For example, it uses hierarchical representation. So, few of us
are talking about Old Fashioned AI as the major architecture for our systems
(although OFAI has its uses in certain areas).

 

Mike Tintner # P.S. You also don't answer my question re: how many
neurons  in total *can* be activated within a half second, or given period,
to work on a given problem - given their relative slowness of communication?
Is it indeed possible for hundreds of millions of messages about that one
subject to be passed among millions of neurons in that short space
(dunno-just asking)? Or did you pluck that figure out of the air?

 

ED PORTER # I was not aware I had been asked this question.  

 

If you are asking where I got the
it-probably-takes-hundreds-of-millions-of-steps-to-recognize-a-face figure, I was
sort of picking it out of the air, but I don't think it is an unreasonable
pick.  I was counting each synaptic transmission as a step.  Assume the
average neuron has roughly 1K active synapses (some people say several
thousand, some say only about 100), and let's say an active cell fires at
least ten times during the 100-step process; since you assume 100 levels
of activation, that would only be assuming an average of 100 neurons
activated at each of your 100 levels, which is not a terribly
broad search.  If a face were focused on, so that it took up just the size
of your thumbnail with your thumb sticking up and your arm extended fully in
front of you, it would activate a portion of your foveated retina having a
resolution of roughly 10K pixels (if I recollect correctly from a
conversation with Tomaso Poggio).  Presumably this would include 3 color
inputs, a BW input, and magnocellular and parvocellular inputs from each
eye, so you may well be talking about 100K neurons activated at just the V1
level.  If each has 1K synapses firing 10 times, that's 10K x 100K, or about a
billion synaptic firings right there, in just one of your 100 steps.  Now some of
that activation might be filtered out by the thalamus, but then you would
have to include all the activations used for such filtering, which according
to Stephen Grossberg involves multi-level activations in the
cortico-thalamic feedback loop, and probably would require roughly at
least 100M synaptic activations.  And when you recognize a face you normally
are seeing it substantially larger than your thumbnail at its furthest
extension from your face.  If you saw it as large as the length of your
entire thumb, rather than just your thumbnail, it would project onto about
10 times as many neurons in your thalamus and V1.  So, yes, I was guessing,
but I think hundreds of millions of steps, a.k.a. synaptic activations, was a
pretty safe guess. 
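
Spelling that arithmetic out, using only the same rough numbers as above:

    v1_neurons        = 100_000   # ~100K neurons activated at the V1 level
    synapses_per_cell = 1_000     # ~1K active synapses per neuron
    firings_per_cell  = 10        # each active cell fires ~10 times

    firings_at_v1 = v1_neurons * synapses_per_cell * firings_per_cell
    print(firings_at_v1)          # 1000000000 -- about a billion synaptic firings

    # a face seen at full-thumb-length rather than thumbnail size projects onto
    # roughly 10x as many thalamus/V1 neurons
    print(firings_at_v1 * 10)     # ~10 billion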

 

Ed Porter

 

 

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 5:08 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?

 

ED PORTER # When you say It only takes a few steps to retrieve
something from memory. I hope you realize that depending how you count
steps, it actually probably takes hundreds of millions of steps or more.  It
is just that millions of them are performed in parallel, such that the
longest sequence of any one causal path among such steps is no longer than
100 steps.  That is a very, repeat very, different thing than suggesting
that only 100 separate actions were taken.  

 

Ed,

 

Yes, I understood that (though sure, I'm capable of misunderstanding
anything here!) But let's try and make it simple and as concrete as possible
- another way of putting Hawkins' point, as I understand,  is that at any
given level, if the brain is recognising a given feature of the face, it can
only compare it with very few comparable features in that half second with
its 100 operations  - whereas a computer will compare that same feature with
vast numbers of others.

 

And actually ditto, for that useful Hofstadter example you quoted, of
proceeding from aabc: aabd  to jjkl: ???  (although this is a somewhat more
complex operation which may take a couple of seconds 

Re: [agi] AGI communities and support

2007-12-07 Thread Bob Mottram
AGI related activities everywhere are minimal right now.  Even people
interested in AI often have no idea what the term AGI means.  The
meme hasn't spread very far beyond a few technologists and
visionaries.  I think it's only when someone has some amount of
demonstrable success with an AGI system that things will really begin
to move.



On 07/12/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Dec 8, 2007 1:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
  Vlad,
 
  What country are you in?
 
  And what is the level of web-community, academic, commercial, and
  governmental support for AGI in your country?
 
  Ed Porter
 

 I live in Moscow. AGI-related activities are nonexistent here; there's
 a small web community, but I don't follow its discussions, archives
 for recent years don't show anything interesting.

 I wonder where your question has a positive answer and how it can look like.


 --
 Vladimir Nesov  mailto:[EMAIL PROTECTED]





RE: [agi] Complexity in AGI design

2007-12-07 Thread Derek Zahn
Dennis Gorelik writes: Derek, I quoted this article of Richard's in my blog: 
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
Cool.  Now I'll quote your blogged response:
 
 So, if low level brain design is incredibly complex - how do we copy it? The 
 answer is: we don't copy low level brain design. Low level design is not 
 critical for AGI. Instead we observe high level brain 
 patterns and try to implement them on top of our own, more understandable, 
 low level design.
 
I'm not sure for myself what I think of this complexity argument, so I don't 
have anything to say about your answer except to wish you luck (if Richard is 
right, you'll need a lot of it; if many paths lead up the hill then you might 
not need much at all).
 
I am curious what you mean by high level brain patterns though.  Could you 
give an example?
 


Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Vladimir Nesov
On Dec 7, 2007 10:54 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 So, as I understand you, you are basically agreeing with me.  Is this
 correct?

 Ed Porter

I agree that high-level control allows more chaos at lower level, but
I don't think that copycat-level stochastic search is necessary or
even desirable.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Ed Porter
Vlad,

So, as I understand you, you are basically agreeing with me.  Is this
correct?

Ed Porter

-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 2:24 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Evidence complexity can be controlled by guiding hands

On Dec 7, 2007 7:42 PM, Ed Porter [EMAIL PROTECTED] wrote:

 Yes, there would be a tremendous number of degrees of freedom, but there
 would be a tremendous number of sources of guidance and review from the
best
 matching prior experiences of the past successes and failures of the most
 similar perceptions, thoughts, or behaviors in the most similar contexts.
 With such guidance, there is reason to believe that even a system large
 enough to compute human-level world knowledge would stay largely within
the
 realm of common sense and not freak out.  It should have enough randomness
 to fairly often think strange new thoughts, but it should have enough
 common-sense from its vast experiences to judge roughly as well as a human
 when to, and when not to, act on such strange new ideas.

Ed,

I believe that high-level control is instrumental not only to
deliberation-level decision-making, but to the very formation of the system's
low-level knowledge and behavior. Hebbian learning needs sufficient
time to collect evidence before it starts to reliably activate an
inferential link, lest it risk disturbing the system's dynamics and
creating a bias with positive feedback. This makes fast learning and
learning from few examples problematic. Learning can work much faster
if it's assisted by recitation loops, which are triggered only for
regularities that are deemed reasonable by higher-level processes
(these processes can be of just a slightly higher level; I'm not
talking about overall control).

Also, the mechanism you describe is why I think it's OK to activate
everything that activates: higher-level control (based on regeneration
of typical patterns in recitation loops) should remove nonsense and at
the same time teach the system not to produce it again.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] Solution to Grounding problem

2007-12-07 Thread Derek Zahn
Richard Loosemore writes:

 This becomes a problem because when we say of another person that they
 meant something by their use of a particular word (say cat), what we
 actually mean is that that person had a huge amount of cognitive
 machinery connected to that word cat (reaching all the way down to the
 sensory perception mechanisms that allow the person to recognise an
 instance of a cat, and motor output mechanisms that let them interact
 with a cat).

 What Stephen Harnad said in his original paper was Hang on a second:
 if the AI system does not have all that other machinery inside it when
 it uses a word like cat, surely it does not really mean the same
 thing by cat as a person would?

 [...]
 
Thanks, Richard.  That post was a terrific bit of writing.
 
On a related note, I think those that are uneasy with the idea of grounding 
symbols in experience with a virtual world wonder whether the (current) thin 
and skewed sensory experience of cats or any other concept-friendly 
regularities in such worlds are sufficiently similar to provide enough of the 
same meaning for communication with humans using the resulting concepts.
 
For that matter, one wonders even when concepts are grounded in the real world 
whether the resulting concepts and their meanings can be similar enough for 
communication if the concept formation machinery is not quite similar to our 
own; sometimes even individual human conceptualizations are barely similar 
enough to allow conversation.
 
Very interesting stuff.
 
 


Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Vladimir Nesov
On Dec 7, 2007 7:42 PM, Ed Porter [EMAIL PROTECTED] wrote:

 Yes, there would be a tremendous number of degrees of freedom, but there
 would be a tremendous number of sources of guidance and review from the best
 matching prior experiences of the past successes and failures of the most
 similar perceptions, thoughts, or behaviors in the most similar contexts.
 With such guidance, there is reason to believe that even a system large
 enough to compute human-level world knowledge would stay largely within the
 realm of common sense and not freak out.  It should have enough randomness
 to fairly often think strange new thoughts, but it should have enough
 common-sense from its vast experiences to judge roughly as well as a human
 when to, and when not to, act on such strange new ideas.

Ed,

I believe that high-level control is instrumental not only to
deliberation-level decision-making, but to the very formation of the system's
low-level knowledge and behavior. Hebbian learning needs sufficient
time to collect evidence before it starts to reliably activate an
inferential link, lest it risk disturbing the system's dynamics and
creating a bias with positive feedback. This makes fast learning and
learning from few examples problematic. Learning can work much faster
if it's assisted by recitation loops, which are triggered only for
regularities that are deemed reasonable by higher-level processes
(these processes can be of just a slightly higher level; I'm not
talking about overall control).

Also, the mechanism you describe is why I think it's OK to activate
everything that activates: higher-level control (based on regeneration
of typical patterns in recitation loops) should remove nonsense and at
the same time teach the system not to produce it again.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Bob,

I agree.  I think we should be able to make PC-based AGI's.  With only about
50 million atoms they really wouldn't be able to have much world knowledge,
but they should be able to understand, say, the world of a simple video game,
such as Pong or Pac-Man.

As Richard Loosemore and I have just discussed in our last several emails on
the  Evidence complexity can be controlled by guiding hands thread, to
achieve powerful AGI's we will need very large complex systems and we need
to start experimenting with how to control the complexity of such larger
systems.

So building AGI's on a PC is a good start, which will hopefully start
happening after OpenCog comes out, but we also need to start building and
exploring larger systems.  It is my very rough guess that human-level 
AGI will need within a few orders of magnitude of 10 TBytes of RAM (or
approximately-as-fast memory), 10T random RAM accesses/sec, and a global
cross-sectional bandwidth of 100G 64-byte messages/sec.  So you won't have that on
your desktop any time soon.  

But in twenty years you might.
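
To put rough per-node numbers on that (the per-PC figures below are my
assumptions about a circa-2007 commodity box, not part of the estimate above):
even the RAM target alone implies thousands of machines, and the random-access
and messaging targets push the count far higher.

    # rough targets from the estimate above
    target_ram_bytes = 10e12   # ~10 TBytes of RAM (or similarly fast memory)
    target_accesses  = 10e12   # ~10T random RAM accesses/sec
    target_messages  = 100e9   # ~100G 64-byte messages/sec cross-sectional bandwidth

    # assumed per-PC figures, circa 2007 (purely illustrative)
    pc_ram_bytes = 4e9         # 4 GB of RAM
    pc_accesses  = 100e6       # ~100M truly random DRAM accesses/sec
    pc_messages  = 100e3       # ~100K small messages/sec through a commodity NIC

    print("PCs to hold the RAM:     ", int(target_ram_bytes / pc_ram_bytes))  # 2500
    print("PCs for random accesses: ", int(target_accesses / pc_accesses))    # 100000
    print("PCs for message traffic: ", int(target_messages / pc_messages))    # 1000000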

We should be exploring constantly bigger and bigger machines between a PC
AGI and human level AGI's to learn more and more about the problem of
scaling up large systems.

Ed Porter

-Original Message-
From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 10:21 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?

If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much.  Provided that I had the right knowledge I
think I could produce a proof of concept type AGI on a single PC
today, even if it ran like a tortoise.  It's the knowledge which is
mainly lacking I think.

Although I do a lot of stuff with computer vision I find myself not
being all that restricted by computational limitations.  This
certainly wasn't the case a few years ago.  Generally even the lowest
end hardware these days has enough compute power to do some pretty
sophisticated stuff, especially if you include the GPU.



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney

--- Dennis Gorelik [EMAIL PROTECTED] wrote:

 Matt,
 
  For example, I disagree with Matt's claim that AGI research needs
  special hardware with massive computational capabilities.
 
  I don't claim you need special hardware.
 
 But you claim that you need massive computational capabilities
 [considerably above capabilities of regular modern PC], right?
 That means special.

No, my proposal requires lots of regular PCs with regular network connections.
 It is a purely software approach.  But more hardware is always better. 
http://www.mattmahoney.net/agi.html


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
 Clearly the brain works VASTLY differently and more efficiently than current
 computers - are you seriously disputing that?

It is very clear that in many respects the brain is much less efficient than
current digital computers and software.

It is more energy-efficient by and large, as Read Montague has argued ...
but OTOH sometimes it is way less algorithmically efficient.

For instance, in spite of its generally high energy efficiency, my brain wastes
a lot more energy calculating 969695775755 / 8884 than my computer
does.

And e.g. visual cortex, while energy-efficient, is horribly algorithmically
inefficient, involving e.g. masses of highly erroneous motion-sensing neurons
whose results are averaged together to give reasonably accurate values.

-- Ben



Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

 No, my proposal requires lots of regular PCs with regular network connections.

Properly connected, a set of regular PCs would usually have way more
power than a regular PC.
That makes your hardware request special.
My point is - AGI can successfully run on a single regular PC.
Special hardware would be required later, when you try to scale
out a working AGI prototype.

  It is a purely software approach.  But more hardware is always better.

Not always.
More hardware costs money and requires more maintenance.

 http://www.mattmahoney.net/agi.html





[agi] Worst case scenario

2007-12-07 Thread Bryan Bishop

Here's the worst case scenario I see for AI: that there has to be 
hardware complexity to the extent that generally nobody is going to be 
able to get the initial push. Indeed, there's Moore's law to take 
account of, but the economics might just prevent us from accumulating 
enough nodes, enough connections, and so on.

So, worst case, maybe some gazillionaire will have to purchase/make his 
own semiconductor manufacturing facility and have it completely devoted 
to building additional microprocessors to add to a giant cluster, 
supercomputer, or computation cloud, whatever you want to call it.

A first step on the way to such a setup might be purchasing 
supercomputer time and trying to wire up a few different supers, then 
trying to see if even a percentage of the computational power predicted 
yields results remotely resembling AI.

Over time, AI will improve, and so the semiconductor facility can recover 
costs by hosting a very large digital work force; but this is all or 
nothing, so what arguments might there be to persuade a 
gazillionaire into doing this?

- Bryan



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bryan Bishop
On Friday 07 December 2007, Mike Tintner wrote:
 P.S. You also don't answer my question re: how many neurons  in total
 *can* be activated within a half second, or given period, to work on
 a given problem - given their relative slowness of communication? Is
 it indeed possible for hundreds of millions of messages about that
 one subject to be passed among millions of neurons in that short
 space (dunno-just asking)? Or did you pluck that figure out of the
 air?

I suppose that the number of neurons that are working on a problem at a 
moment will have to expand exponentially based on the number of 
synaptic connections per neuron, as well as the number of hits/misses 
per neuron that are receiving the signals, viewed as an expanding 
light-cone sphere in the brain (it's, of course, a neural activity 
cone / sphere, not light). I am sure this rate can be made into a 
model. 
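
A minimal version of such a model (the branching factor, hit rate, seed size,
and step time are invented for illustration; only the expand-then-saturate
shape matters):

    TOTAL_NEURONS = 100e9   # whole-brain ceiling
    BRANCHING     = 1000    # synaptic connections per neuron
    HIT_RATE      = 0.01    # fraction of downstream neurons that actually fire

    def active_after(steps, seed=1000):
        # each ~5 ms step the active set grows by the effective fan-out
        # (BRANCHING * HIT_RATE), capped by the total number of neurons --
        # the expanding activity sphere
        active = seed
        for _ in range(steps):
            active = min(TOTAL_NEURONS, active * BRANCHING * HIT_RATE)
        return active

    for steps in (1, 3, 5, 10):
        print(steps, f"{active_after(steps):.3g}")   # saturates well before 100 steps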

- Bryan


Re: [agi] AGI communities and support

2007-12-07 Thread Bob Mottram
 The robotics revolution is already happening. Presumably, as some kind of
 roboticist, you would agree?


The robotics revolution has already happened.  There has been a quiet
revolution in some manufacturing industries with large amounts of
human labour being replaced by automation.  However, this isn't
something which most people ever see or are aware of and it doesn't
catch media attention since this is mostly dull, repetitive and
unglamorous work.  Much of our modern lifestyles with cheap consumer
goods is actually supported and enabled by robotic labour.  Cheap
human labour remains competitive, but there will come a time within
the next few decades when no human labour - however inexpensive - will
be able to compete economically against automated factories.

However, this is only one largely unseen revolution.  The next
robotics revolution is yet to begin, and this will see another wave of
automation moving out of factories and into homes and offices.  Don't
be fooled by the showy humanoids that you might see strutting around
or playing violins.  I very much doubt that consumer robotics is going
to look like this.  It's going to be far more utilitarian.  Think
Roomba rather than ASIMO.  This revolution will begin once there is
some cheap and easy way of taking regular PC hardware and making it
mobile by adding legs, wheels and arms.  At the moment doing this
involves a good deal of expertise, and we're waiting for off-the-shelf
standardised systems to replace the current perpetual re-inventions of
the wheel.  Once you have standards and the price is right then the
vast pool of IT developers who previously had no involvement with
robots will be able to apply their expertise to robotics problems.

If you're interested in what needs to be done before the second wave
can begin see Matt Trossen's recent talk.

http://www.trossenrobotics.com/tutorials/trossenroboticssystem.aspx



Re: [agi] AGI communities and support

2007-12-07 Thread Vladimir Nesov
On Dec 8, 2007 1:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 What country are you in?

 And what is the level of web-community, academic, commercial, and
 governmental support for AGI in your country?

 Ed Porter


I live in Moscow. AGI-related activities are nonexistent here; there's
a small web community, but I don't follow its discussions, archives
for recent years don't show anything interesting.

I wonder where your question has a positive answer and how it can look like.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Solution to Grounding problem

2007-12-07 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore writes:

  This becomes a problem because when we say of another person that they
  meant something by their use of a particular word (say cat), what we
  actually mean is that that person had a huge amount of cognitive
  machinery connected to that word cat (reaching all the way down to the
  sensory perception mechanisms that allow the person to recognise an
  instance of a cat, and motor output mechanisms that let them interact
  with a cat).
 
  What Stephen Harnad said in his original paper was Hang on a second:
  if the AI system does not have all that other machinery inside it when
  it uses a word like cat, surely it does not really mean the same
  thing by cat as a person would?
 
  [...]
 
Thanks, Richard.  That post was a terrific bit of writing.
 
On a related note, I think those that are uneasy with the idea of 
grounding symbols in experience with a virtual world wonder whether 
the (current) thin and skewed sensory experience of cats or any other 
concept-friendly regularities in such worlds are sufficiently similar to 
provide enough of the same meaning for communication with humans using 
the resulting concepts.
 
For that matter, one wonders even when concepts are grounded in the real 
world whether the resulting concepts and their meanings can be similar 
enough for communication if the concept formation machinery is not quite 
similar to our own; sometimes even individual human 
conceptualizations are barely similar enough to allow conversation.


That is a very good point, and one to which I don't have a ready answer.

This question will attract a good deal of attention when we get nearer 
to the point of being able to test real candidate AGI systems.


It is another reason to stay close to the human design, I believe.



Richard Loosemore



Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 
 
 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions. 

 For example, I ask the computer who is this? and attach a video clip from my
 security camera.


Why do you need image recognition in your AGI prototype?
You can feed it with text. Then the AGI would simply parse the text [and
optionally - Google it].

No need for massive computational capabilities.



Re[4]: [agi] Solution to Grounding problem

2007-12-07 Thread Dennis Gorelik
Mike,

 1. Bush walks like a cowboy, doesn't he?
 The only way a human - or a machine - can make sense of sentence 1 is by
 referring to a mental image/movie of Bush walking.

That's not the only way to make sense of the saying.
There are many other ways: chat with other people, or look on Google:
http://www.google.com/search?q=Bush+walks+cowboy


http://images.google.com/images?q=grundchen

 Merely referring to more words won't cut it.

It would. Meaning is a connection between concepts.
If the proper words are referred to, then the meaning is there.


 Oh,  just to make your day, if you don't have a body, you can't understand
 the images either

How is that don't have a body remark relevant?
Computers have a body and senses (such as a keyboard and an Internet connection).

 Is all that clear?

No.
You didn't describe what grounding problem is about.





RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Mike,

MIKE TINTNER # Hawkins' point as to how the brain can decide in a
hundred steps what takes a computer a million or billion steps (usually
without much success) is:

The answer is the brain doesn't 'compute' the answers; it retrieves the 
answers from memory. In essence, the answers were stored in memory a long 
time ago. It only takes a few steps to retrieve something from memory. Slow 
neurons are not only fast enough to do this, but they constitute the memory 
themselves. The entire cortex is a memory system. It isn't a computer at 
all. [On Intelligence - Chapter on Memory]

ED PORTER # When you say It only takes a few steps to retrieve
something from memory. I hope you realize that depending how you count
steps, it actually probably takes hundreds of millions of steps or more.  It
is just that millions of them are performed in parallel, such that the
longest sequence of any one causal path among such steps is no longer than
100 steps.  That is a very, repeat very, different thing than suggesting
that only 100 separate actions were taken.  

You may already know and mean this, but from a quick read of your argument
it was not clear you did.

So I don't know which side of the Do we need massive computational
capabilities? question you are on, but we do need massive computational
capabilities.  That 100-step task you referred to, which often involves
recognizing a person at a different scale, angle, body position, facial
expression, and lighting than we have seen them in before, would probably
require many hundreds of millions of neuron-to-neuron messages in the brain,
and many hundreds of millions of computations in a computer.

I hope you realize that Hawkins' theory of hierarchical memory means that
images are not stored as anything approaching photographs or drawings.  They
are stored as distributed hierarchical representations, in which a match
would often require parallel computing involving matching and selection at
multiple different representational levels.  The answer is not retrieved
from memory by any simple process, like vectoring into a look-up table and
hopping to an address where the matching image is simply retrieved like a
jpg file.  The retrieval is a relatively massively parallel
operation.

You may already understand all of this, but it was not obvious from your
below post.  Some parts of your post seemed to reflect the correct
understanding, others didn't, at least from my quick read.

Ed Porter

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 3:26 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?



Matt,

First of all, we are, I take it, discussing how the brain or a computer can 
recognize an individual face from a video -  obviously the brain cannot 
match a face to a selection of a  billion other faces.

Hawkins' answer to your point that the brain runs masses of neurons in 
parallel in order to accomplish facial recognition is:

if I have many millions of neurons working together, isn't that like a 
parallel computer? Not really. Brains operate in parallel and parallel 
computers operate in parallel, but that's the only thing they have in 
common.

His basic point, as I understand, is that no matter how many levels of brain

are working on this problem of facial recognition, they are each still only 
going to be able to perform about ONE HUNDRED steps each in that half 
second.  Let's assume there are levels for recognising the invariant 
identity of this face, different features, colours, shape, motion  etc - 
each of those levels is still going to have to reach its conclusions 
EXTREMELY rapidly in a very few steps.

And all this, as I said, I would have thought all you guys should be able to

calculate within a very rough ballpark figure. Neurons only transmit signals

at relatively slow speeds, right? Roughly five million times slower than 
computers. There must be a definite limit to how many neurons can be 
activated and how many operations they can perform to deal with a facial 
recognition problem, from the time the light hits the retina to a half 
second later? This is the sort of thing you all love to calculate and is 
really important - but where are you when one really needs you?

Hawkins' point as to how the brain can decide in a hundred steps what takes 
a computer a million or billion steps (usually without much success) is:

The answer is the brain doesn't 'compute' the answers; it retrieves the 
answers from memory. In essence, the answers were stored in memory a long 
time ago. It only takes a few steps to retrieve something from memory. Slow 
neurons are not only fast enough to do this, but they constitute the memory 
themselves. The entire cortex is a memory system. It isn't a computer at 
all. [On Intelligence - Chapter on Memory]

I was v. crudely arguing something like this in a discussion with Richard 
about massive parallel computation.  If 

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner



Matt,

First of all, we are, I take it, discussing how the brain or a computer can 
recognize an individual face from a video -  obviously the brain cannot 
match a face to a selection of a  billion other faces.


Hawkins' answer to your point that the brain runs masses of neurons in 
parallel in order to accomplish facial recognition is:


if I have many millions of neurons working together, isn't that like a 
parallel computer? Not really. Brains operate in parallel and parallel 
computers operate in parallel, but that's the only thing they have in 
common.


His basic point, as I understand, is that no matter how many levels of brain 
are working on this problem of facial recognition, they are each still only 
going to be able to perform about ONE HUNDRED steps each in that half 
second.  Let's assume there are levels for recognising the invariant 
identity of this face, different features, colours, shape, motion  etc - 
each of those levels is still going to have to reach its conclusions 
EXTREMELY rapidly in a very few steps.


And all this, as I said, I would have thought all you guys should be able to 
calculate within a very rough ballpark figure. Neurons only transmit signals 
at relatively slow speeds, right? Roughly five million times slower than 
computers. There must be a definite limit to how many neurons can be 
activated and how many operations they can perform to deal with a facial 
recognition problem, from the time the light hits the retina to a half 
second later? This is the sort of thing you all love to calculate and is 
really important - but where are you when one really needs you?


Hawkins' point as to how the brain can decide in a hundred steps what takes 
a computer a million or billion steps (usually without much success) is:


The answer is the brain doesn't 'compute' the answers; it retrieves the 
answers from memory. In essence, the answers were stored in memory a long 
time ago. It only takes a few steps to retrieve something from memory. Slow 
neurons are not only fast enough to do this, but they constitute the memory 
themselves. The entire cortex is a memory system. It isn't a computer at 
all. [On Intelligence - Chapter on Memory]


I was v. crudely arguing something like this in a discussion with Richard 
about massive parallel computation.  If Hawkins is  right, and I think he's 
at least warm, you guys have surely got it all wrong.  (although you might 
still argue, like Ben, that you can do it your way, not the brain's - but 
hell, the difference in efficiency is so vast it surely ought to break your 
engineering heart).



Matt/ MT:
Thanks. And I repeat my question elsewhere : you don't think that the 
human

brain which does this in say half a second, (right?), is using massive
computation to recognize that face?


So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and 
this

will all run on your PC?  Let me know when your program is finished so I can
try it out.


You guys with all your mathematical calculations re the brain's total
neurons and speed of processing surely should be able to put ball-park
figures on the maximum amount of processing that the brain can do here.

Hawkins argues:

neurons are slow, so in that half a second, the information entering your
brain can only traverse a chain ONE HUNDRED neurons long. ..the brain
'computes' solutions to problems like this in one hundred steps or fewer,
regardless of how many total neurons might be involved. From the moment
light enters your eye to the time you [recognize the image], a chain no
longer than one hundred neurons could be involved. A digital computer
attempting to solve the same problem would take BILLIONS of steps. One
hundred computer instructions are barely enough to move a single character
on the computer's display, let alone do something interesting.


Which is why the human brain is so bad at arithmetic and other tasks that
require long chains of sequential steps.  But somehow it can match a face to 
a
name in 0.5 seconds.  Neurons run in PARALLEL.  Your PC does not.  Your 
brain

performs 10^11 weighted sums of 10^15 values in 0.1 seconds.  Your PC will
not.
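
Putting rough numbers on both sides of that comparison (the synapse count and
timing window are the ones quoted just above; the ~5 ms per neuron-to-neuron
step is an assumed figure):

    # serial depth: Hawkins' "one hundred steps"
    recognition_time = 0.5     # seconds to recognize a face
    step_time        = 0.005   # ~5 ms per neuron-to-neuron step (assumed)
    print("longest possible chain:", round(recognition_time / step_time))  # ~100

    # parallel throughput: the figures quoted just above
    synapses = 1e15
    window   = 0.1             # seconds for one pass of weighted sums
    print("synapse operations/sec:", synapses / window)                    # ~1e16
    print("operations per serial step:", (synapses / window) * step_time)  # ~5e13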




IOW, if that's true, the massive computational approach is surely
RIDICULOUS - a grotesque travesty of engineering principles of economy, 
no?

Like using an entire superindustry of people to make a single nut? And, of
course, it still doesn't work. Because you just don't understand how
perception works in the first place.

Oh right... so let's make our computational capabilities even more 
massive,

right?  Really, really massive. No, no, even bigger than that?


  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 

 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - 
 in

 fact, I would argue, essential - 

Re: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Vladimir Nesov
On Dec 7, 2007 7:05 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 You are asking good questions about the mechanisms, which I am trying to
 explore empirically.  No good answers to this yet, although I have many
 candidate solutions, some of which (I think) look like your above model.

 I certainly agree with the sentiment that not *all* of the process can
 be as fluid as the higher level parts (if that is what you are meaning).

Or maybe they even shouldn't, and can't, be too fluid: otherwise it would
be a challenge to implement precise procedures (like playing the piano).

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re[2]: [agi] How to represent things problem

2007-12-07 Thread Dennis Gorelik
Richard,

 the instance nodes are such an
 important mechanism that everything depends on the details of how they
 are handled.

Correct.


 So, to consider one or two of the details that you mention.  You would
 like there to be only a one-way connection between the generic node (do
 you call this the pattern node?)

1) All nodes are equal.

2) Nodes can point to each other.
Yes, the connection should be one-way.
(E.g.: you know George Bush, but he doesn't know you :-))

A two-way connection can easily be implemented by two separate one-way
connections.
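
A minimal sketch of that scheme (the class and names are illustrative, not
anyone's actual implementation): each node holds one-way links, and a
mutual association is simply two directed links.

class Node:
    """A concept node holding one-way (directed) links to other nodes."""
    def __init__(self, name):
        self.name = name
        self.links = {}                    # target Node -> link strength

    def link_to(self, other, strength=1.0):
        self.links[other] = strength

you  = Node("you")
bush = Node("George Bush")

you.link_to(bush)    # you know George Bush...
                     # ...but he doesn't know you unless the reverse link
                     # is created explicitly: bush.link_to(you)

print(bush in you.links, you in bush.links)   # True False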


 For instance, we are able to see a field of patterns, of
 different colors, and then when someone says the phrase the green 
 patterns we find that the set of green patterns jumps out at us from
 the scene.  It is as if we did indeed have links from the generic 
 concept [green pattern] to all the instances.

Yes, that's a good way to store links:
all relevant nodes are connected.


 Another question: what do we do in a situation where we see a field of
 grass, and think about the concept [grass blade]?

The "field of grass" concept and the "grass blade" concept are obviously
directly connected.
This link was formed because we saw "field of grass" and "grass blade"
together many times.


 Are there individual instances for each grass blade?

If you remember individual instances -- then yes.

 Are all of these linked to the generic concept of [grass blade]?

Some grass blades may be directly connected to "field of grass".
Others may be connected only through other grass-blade instances.
It depends on whether it's useful for the brain to keep these direct
associations.


 What is different is that I see many, many possible ways to get these
 new-node creation mechanisms to work (and ditto for other mechanisms
 like the instance nodes, etc.) and I feel it is extremely problematic to
 focus on just one mechanism and say that THIS is the one I will 
 implement because  I think it feels like a good idea.


 The reason I think this is a problem is that these mechanisms have 
 system-wide consequences (i.e. they give rise to global behaviors) that
 are not necessarily obvious from the definition of the mechanism, so we
 need to build a simulation to find out what those mechanisms *really* do
 when they are put together and allowed to interact.

I agree -- testing is important.
In fact, it's extremely important.

Not only do we need to test several models (of creating and updating nodes
and links), but within a single model we should try several settings
values (such as: if node1 and node2 were activated together, how much
should we increase the strength of the link between them?).
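
One such tunable setting, sketched in the simplest possible form (the
update rule, the increment and the cap are illustrative assumptions, not a
proposal from the thread):

LINK_INCREMENT = 0.1   # the "setting value" under test
LINK_MAX       = 1.0   # keep strengths bounded

links = {}             # (source, target) -> strength; links stay directed

def co_activated(a, b):
    """Strengthen the link in both directions between two co-activated nodes."""
    for pair in ((a, b), (b, a)):
        links[pair] = min(LINK_MAX, links.get(pair, 0.0) + LINK_INCREMENT)

co_activated("field of grass", "grass blade")
co_activated("field of grass", "grass blade")
print(links[("field of grass", "grass blade")])   # 0.2

Testing a model then means sweeping LINK_INCREMENT (plus any decay rule,
cap, and so on) and scoring the resulting network against whatever "good"
has been defined to mean.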


That's why it's important to carefully design tests.
Such tests should work reasonably fast and be able to indicate how
well the system worked.

What is good and what is not good has to be carefully defined.
Not trivial, but a quite doable task.


 I can show you a paper of mine in which I describe my framework in a
 little more detail.

Isn't this paper public yet?





Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:

 
 
  Matt: AGI research needs
  special hardware with massive computational capabilities.
 
 
 Could you give an example or two of the kind of problems that your AGI 
 system(s) will need such massive capabilities to solve? It's so good - in 
 fact, I would argue, essential - to ground these discussions. 

For example, I ask the computer "Who is this?" and attach a video clip from my
security camera.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Evidence complexity can be controlled by guiding hands

2007-12-07 Thread Ed Porter
Vlad,

Agreed.  Copycat is a lot more wild and crazy at the low level than my
system would be.  But my system might operate more like it at a higher, more
deliberative level.  For example, this might be the case if I were trying to
attack a difficult planning problem, such as how to write an answer to a
complex question in a most convincing manner.  (Of course there I would have
the words on my computer screen to help me keep track of a significant part
of the problem space.)

But the fact that Copycat's craziness can be relatively effectively
harnessed to do what it is supposed to do is an encouraging sign that the
potential pitfalls of complexity can be significantly avoided.

I say "significantly" because Richard has a point: once a system gets really
complex, it gets increasingly difficult to feel you truly understand it,
and thus that you can truly trust it.  Of course that goes for people too.
Every so often one of them goes postal. 

Ed Porter

-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 3:07 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Evidence complexity can be controlled by guiding hands

On Dec 7, 2007 10:54 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 So, as I understand you, you are basically agreeing with me.  Is this
 correct?

 Ed Porter

I agree that high-level control allows more chaos at a lower level, but
I don't think that Copycat-level stochastic search is necessary or
even desirable.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner
ED PORTER: When you say "It only takes a few steps to retrieve something
from memory," I hope you realize that, depending on how you count steps, it 
actually probably takes hundreds of millions of steps or more.  It is just 
that millions of them are performed in parallel, such that the longest 
sequence of any one causal path among such steps is no longer than 100 
steps.  That is a very, repeat very, different thing than suggesting that 
only 100 separate actions were taken.  

Ed,

Yes, I understood that (though sure, I'm capable of misunderstanding anything 
here!). But let's try to make it simple and as concrete as possible - another 
way of putting Hawkins' point, as I understand it, is that at any given level, 
if the brain is recognising a given feature of the face, it can only compare it 
with very few comparable features in that half second, with its 100 operations - 
whereas a computer will compare that same feature with vast numbers of others.

And actually ditto for that useful Hofstadter example you quoted, of proceeding 
from aabc : aabd to jjkl : ??? (although this is a somewhat more complex 
operation, which may take a couple of seconds for the brain): again, a typical 
intelligent brain will almost certainly consider v. few options, compared with 
the vast numbers of options considered by that computer.
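
For what it's worth, the letter analogy itself is trivial once the rule has
been guessed; a toy sketch of just the "increment the last letter" reading
(this is not Copycat, which explores many candidate rules stochastically):

def increment_last_letter(s):
    """The rule abstracted from aabc -> aabd: replace the final letter with
    its alphabetic successor."""
    return s[:-1] + chr(ord(s[-1]) + 1)

print(increment_last_letter("aabc"))   # aabd
print(increment_last_letter("jjkl"))   # jjkm

The hard part - for a brain and for Copycat alike - is settling on that rule
(rather than, say, "replace the last letter with d") after considering very
few candidate interpretations.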

Ditto, for god's sake, a human chessplayer like Kasparov: his brain considers 
an infinitesimal percentage of the moves considered by Deep Blue in any given 
period - and yet can still win (occasionally) because, of course, it's working 
on radically different principles.

Hawkins' basic point that the brain "isn't a computer at all" - which I think 
can be read less controversially as "is a machine that works on very 
fundamentally different principles from those of currently programmed 
computers", especially when perceiving objects - holds.

You're not dealing with that basic point, and I find it incredibly difficult to 
get anyone here squarely to face it. People retreat into numbers and millions.

Clearly the brain works VASTLY differently and more efficiently than current 
computers - are you seriously disputing that?

P.S. You also don't answer my question re: how many neurons in total *can* be 
activated within a half second, or a given period, to work on a given problem - 
given their relative slowness of communication? Is it indeed possible for 
hundreds of millions of messages about that one subject to be passed among 
millions of neurons in that short space (dunno - just asking)? Or did you pluck 
that figure out of the air?
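
A rough answer to the P.S., using common ballpark figures rather than
anything from the thread (the step time and fan-out below are assumptions):
the half-second budget caps the depth of the chain, not its breadth, because
each neuron fans out to thousands of others.

STEP_MS   = 5         # ~5 ms per neural step (ballpark assumption)
WINDOW_MS = 500       # half a second
FAN_OUT   = 10_000    # synapses per neuron (ballpark assumption)
NEURONS   = 10**11    # total neuron count, so breadth saturates quickly

max_depth = WINDOW_MS // STEP_MS     # ~100 sequential steps: Hawkins' limit
reach = 1
for _ in range(3):                   # after only three of those steps...
    reach = min(reach * FAN_OUT, NEURONS)

print(max_depth, reach)              # 100 100000000000  (i.e. 10^11 reachable)

So hundreds of millions of messages inside half a second is not implausible
at all; the constraint Hawkins points at is the ~100-step serial depth, not
the total amount of activity.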

P.P.S. A recent book by Read Montague on neuroeconomics makes much the same 
point from a v. different angle - highlighting that computers have a vastly 
wasteful search heritage which, he argues, has its roots in Turing and Bletchley 
Park's attempts to decode Enigma.


Re: [agi] AGI communities and support

2007-12-07 Thread Mike Tintner


Bob: AGI-related activities everywhere are minimal right now.  Even people
interested in AI often have no idea what the term AGI means.  The
meme hasn't spread very far beyond a few technologists and
visionaries.  I think it's only when someone has some amount of
demonstrable success with an AGI system that things will really begin
to move.


Bob,

Just to confirm - after Richard mentioned hearing "AGI" on the radio: I was
following up a brain-machine interface story on the New Scientist website.
At the bottom it offered me:

"Robots - Learn more about the robotics revolution in our continually
updated special report" - which led to a mass of robotics stories.

So I searched the site for "Artificial General Intelligence" and "AGI".
Nothing.


The robotics revolution is already happening. Presumably, as some kind of 
roboticist, you would agree?





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Richard: For my own system (and for Hofstadter too), the natural extension
of the system to a full AGI design would involve a system [that] can change
its approach and rules of reasoning at literally any step of problem-solving
... it will be capable of producing all the human irrationalities that I
listed previously - like not even defining or answering the problem. It will
by the same token have the capacity to be truly creative, because it will
ipso facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you
would think that this is a difficult thing to do?

Richard,

Fine. Sounds interesting. But you don't actually clarify or explain 
anything. Why don't you explain how you or anyone else can fundamentally 
change your approach/rules at any point of solving a problem?


Why don't you - just in plain English, in philosophical as opposed to 
programming form - set out the key rules or principles that allow you 
or anyone else to do this? I have never seen such key rules or 
principles anywhere, nor indeed seen them even adumbrated anywhere 
(fancy word, but it just came to mind). And since they are surely a 
central problem for AGI - and no one has solved AGI - how on earth could 
I not think this a difficult matter?


I have some v. rough ideas about this, which I can gladly set out.  But 
I'd like to hear yours - you should be able to do it briefly. But 
please, no handwaving.


I will try to think about your question when I can, but meanwhile think 
about this:  if we go back to the analogy of painting and whether or not 
it can be used to depict things that are abstract or 
non-representational, how would you respond to someone who wanted exact 
details of how painting could allow that to be possible?


If someone asked that, I couldn't think of anything to say except ... 
why *wouldn't* it be possible?  It would strike me as just not a 
question that made any sense, to ask for the exact reasons why it is 
possible to paint things that are not representational.


I simply cannot understand why anyone would think it not possible to do 
that.  It is possible:  it is not easy to do it right, but that's not 
the point.  Computers can be used to program systems of any sort 
(including deeply irrational things like Microsoft Office), so why would 
anyone think that AGI systems must exhibit only a certain sort of design?


This isn't handwaving, it is just genuine bafflement.




Richard Loosemore
