Richard,
Did you know, for example, that certain kinds of brain damage can leave
a person with the ability to name a visually presented object, but then
be unable to pick the object up and move it through space in a way that
is consistent with the object's normal use ... and that another
Mike Tintner wrote:
Well, I'm not sure that not doing logic necessarily means a system is
irrational, i.e. whether rationality equates to logic. Any system
consistently followed can be classified as rational. If, for example, a
program consistently does Freudian free association and produces nothing
but
Sounds like the worst-case scenario: computations that need between, say, 20 and
100 PCs. Too big to run on a very souped-up server (4-way quad processor with
128GB RAM), but scaling up to a 100-PC Beowulf cluster typically means a factor
of 10 slow-down due to communications (unless it's a
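The trade-off described above can be put in rough quantitative terms with an Amdahl-style toy model. The 10x communication penalty and the node counts come from the message itself; the parallel fraction and the model structure are illustrative assumptions, not measurements:

```python
def effective_speedup(nodes, parallel_fraction, comm_penalty):
    """Toy model: ideal Amdahl speedup divided by a communication penalty.

    comm_penalty is the factor by which inter-node communication slows the
    whole computation (~10x for a commodity Beowulf cluster without a fast
    interconnect, per the message above). parallel_fraction is an assumed
    fraction of the work that parallelizes at all.
    """
    serial = 1.0 - parallel_fraction
    ideal = 1.0 / (serial + parallel_fraction / nodes)
    return ideal / comm_penalty

# Under these assumptions, a 100-node cluster paying a 10x communication
# penalty ends up slower than a single 16-way server paying none:
server = effective_speedup(nodes=16, parallel_fraction=0.95, comm_penalty=1.0)
cluster = effective_speedup(nodes=100, parallel_fraction=0.95, comm_penalty=10.0)
print(server, cluster)
```

On these (assumed) numbers the cluster delivers less net speedup than the single server, which is exactly the awkward middle ground the message describes.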
Your bot is having a conversation - in words. Words are in fact continually
made sense of - grounded - by the human brain - converted into sensory
images - and have to be.
I've given simple examples of snatches of conversation, which are in fact
obviously thus grounded and have to be.
The
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: AGI research needs
special hardware with massive computational capabilities.
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good -
If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much. Provided that I had the right knowledge I
think I could produce a proof of concept type AGI on a single PC
today, even if it ran
I have a doubt about the role of stochastic variance in this parallel
terraced scan as it proceeds in humans (or could proceed with the same
functional behavior in AIs). Could it be that low-level mechanisms are
not that stochastic and just compute a 'closure' of the given context?
Closure brings up a
Dennis Gorelik wrote:
Richard,
Did you know, for example, that certain kinds of brain damage can leave
a person with the ability to name a visually presented object, but then
be unable to pick the object up and move it through space in a way that
is consistent with the object's normal use
Vladimir Nesov wrote:
I have a doubt about the role of stochastic variance in this parallel
terraced scan as it proceeds in humans (or could proceed with the same
functional behavior in AIs). Could it be that low-level mechanisms are
not that stochastic and just compute a 'closure' of the given context?
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote:
If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much. Provided that I had the right knowledge I
think I could produce a
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
To the extent it is not proprietary, could you please list some of the types
of parameters that have to be tuned, and the types, if any, of
Loosemore-type complexity problems you envision in Novamente or have
experienced with
Mike,
I think you are going to have to be specific about what you mean by
irrational because you mostly just say that all the processes that
could possibly exist in computers are rational, and I am wondering what
else is there that irrational could possibly mean. I have named many
Jean-Paul Van Belle wrote:
Interesting - after drafting three replies I have come to realize
that it is possible to hold two contradictory views and live or even
run with it. Looking at their writings, both Ben and Richard know damn
well what complexity means and entails for AGI. Intuitively, I
Dennis Gorelik wrote:
Richard,
It seems that under Real Grounding Problem you mean Communication
Problem.
Basically your goal is to make sure that when two systems communicate
with each other -- they understand each other correctly.
Right?
If that's the problem -- I'm ready to give you my
Matt: AGI research needs
special hardware with massive computational capabilities.
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I would argue, essential - to ground these
Ed Porter wrote:
RICHARD LOOSEMORE= At the cognitive level, on the other hand, there is
a strong possibility that what happens when the mind builds a model of
some situation is that it gets a large number of concepts to come together and
try to relax into a stable representation, and that
Thanks. And I repeat my question elsewhere: you don't think that the human
brain, which does this in, say, half a second (right?), is using massive
computation to recognize that face?
You guys with all your mathematical calculations re the brain's total
neurons and speed of processing surely
Mike Tintner wrote:
Richard: Mike,
I think you are going to have to be specific about what you mean by
irrational because you mostly just say that all the processes that could
possibly exist in computers are rational, and I am wondering what else is
there that irrational could possibly mean. I have named many
--- Mike Tintner [EMAIL PROTECTED] wrote:
Thanks. And I repeat my question elsewhere: you don't think that the human
brain, which does this in, say, half a second (right?), is using massive
computation to recognize that face?
So if I give you a video clip then you can match the person in
Richard, With regard to your below post:
RICHARD LOOSEMORE ### Allowing the system to adapt to the world by
giving it flexible mechanisms that *build* mechanisms (which it then uses),
is one way to get the system to do some of the work of fitting parameters
(as Ben would label it), or reducing
Hi Matt, wonderful idea - now it will even show the typical human trait of
lying... When I ask it do you still love me? most answers in its database will
have Yes as an answer, but when I ask it what's my name? it'll call me John.
However, your approach is actually already being implemented to
On Dec 7, 2007 7:41 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
No, my proposal requires lots of regular PCs with regular network
connections.
A properly connected set of regular PCs would usually have far more
power than a single regular PC.
That makes your hardware request special.
My point is -
Hippocampus damage and resulting learning deficiencies are very
interesting phenomena. They probably show how important high-level
control of learning is in efficient memorization, particularly in
memorization of regularities that are presented only a few times (or
just once, as in the case of
Derek,
Low level design is not critical for AGI. Instead we observe high level brain
patterns and try to implement them on top of our own, more understandable,
low level design.
I am curious what you mean by high level brain patterns
though. Could you give an example?
1) All
Richard,
Let's save both of us some time and wait until somebody else reads this
Cognitive Science book and comes here to discuss it.
:-)
Though interesting, interpreting brain damage experiments is not the
most important thing for AGI development.
In both cases the vision module works fine.
Richard,
This could be called a communication problem, but it is internal, and in
the AGI case it is not so simple as just miscalculated numbers.
Communication between subsystems is still communication.
So I suggest calling it a Communication problem.
So here is a revised version of the
On Dec 8, 2007 2:10 AM, Ed Porter [EMAIL PROTECTED] wrote:
Vlad,
The Russians have traditionally had more than their share of math whizzes,
so I am surprised there isn't more interest in this subject there.
I don't understand "I wonder where your question has a positive answer and
how it can
Vlad,
The Russians have traditionally had more than their share of math whizzes,
so I am surprised there isn't more interest in this subject there.
I don't understand "I wonder where your question has a positive answer and
how it can look like."
Perhaps you mean, you wonder where one would be
Mike Tintner # Yes, I understood that (though sure, I'm capable of
misunderstanding anything here!)
ED PORTER # Great, I am glad you understood this. Part of what you
said indicated you did. BTW, we are all capable of misunderstanding things.
Mike Tintner # Hawkins' basic point
AGI related activities everywhere are minimal right now. Even people
interested in AI often have no idea what the term AGI means. The
meme hasn't spread very far beyond a few technologists and
visionaries. I think it's only when someone has some amount of
demonstrable success with an AGI system
Dennis Gorelik writes: Derek, I quoted this article by Richard in my blog:
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
Cool. Now I'll quote your blogged response:
So, if low level brain design is incredibly complex - how do we copy it? The
answer is:
On Dec 7, 2007 10:54 PM, Ed Porter [EMAIL PROTECTED] wrote:
Vlad,
So, as I understand you, you are basically agreeing with me. Is this
correct?
Ed Porter
I agree that high-level control allows more chaos at lower level, but
I don't think that copycat-level stochastic search is necessary or
Vlad,
So, as I understand you, you are basically agreeing with me. Is this
correct?
Ed Porter
-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Friday, December 07, 2007 2:24 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Evidence complexity can be controlled by
Richard Loosemore writes: This becomes a problem because when we say of
another person that they meant something by their use of a particular word
(say cat), what we actually mean is that that person had a huge amount of
cognitive machinery connected to that word cat (reaching all the way
On Dec 7, 2007 7:42 PM, Ed Porter [EMAIL PROTECTED] wrote:
Yes, there would be a tremendous number of degrees of freedom, but there
would be a tremendous number of sources of guidance and review from the best
matching prior experiences of the past successes and failures of the most
similar
Bob,
I agree. I think we should be able to make PC-based AGIs. With only about
50 million atoms they really wouldn't be able to have much world knowledge,
but they should be able to understand, say, the world of a simple video game,
such as Pong or Pac-Man.
As Richard Loosemore and I have just
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
Matt,
For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.
I don't claim you need special hardware.
But you claim that you need massive computational capabilities
Clearly the brain works VASTLY differently and more efficiently than current
computers - are you seriously disputing that?
It is very clear that in many respects the brain is much less efficient than
current digital computers and software.
It is more energy-efficient by and large, as Read
Matt,
No, my proposal requires lots of regular PCs with regular network connections.
A properly connected set of regular PCs would usually have far more
power than a single regular PC.
That makes your hardware request special.
My point is - AGI can successfully run on a single regular PC.
Special hardware
Here's the worst-case scenario I see for AI: that there has to be
hardware complexity to the extent that generally nobody is going to be
able to get the initial push. Indeed, there's Moore's law to take
account of, but the economics might just prevent us from accumulating
enough nodes, enough
On Friday 07 December 2007, Mike Tintner wrote:
P.S. You also don't answer my question re: how many neurons in total
*can* be activated within a half second, or given period, to work on
a given problem - given their relative slowness of communication? Is
it indeed possible for hundreds of
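The timing question above has a standard back-of-the-envelope answer, which is worth making explicit. The figures below are commonly cited ballpark assumptions (not measurements from the thread): cortical neurons sustain firing rates on the order of 100 Hz at most, so a half-second window allows only about 50 strictly sequential steps, even though billions of neurons can participate in parallel within that window:

```python
# Ballpark figures - assumptions for illustration, not measurements:
max_firing_rate_hz = 100      # rough upper end of sustained cortical firing
window_s = 0.5                # the half-second recognition window discussed
cortical_neurons = 16e9       # order-of-magnitude cortical neuron count

# Serial depth: how many neuron-to-neuron steps fit in the window.
serial_steps = int(max_firing_rate_hz * window_s)
print(serial_steps)  # 50

# Parallel width: in principle, every neuron can be active in that window,
# so the brain trades serial depth for enormous parallel breadth.
```

This is essentially the arithmetic behind the "hundred steps" argument quoted later in this thread: whatever the brain does in half a second, it cannot involve a long sequential chain of operations.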
The robotics revolution is already happening. Presumably, as some kind of
roboticist, you would agree?
The robotics revolution has already happened. There has been a quiet
revolution in some manufacturing industries with large amounts of
human labour being replaced by automation. However,
On Dec 8, 2007 1:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
Vlad,
What country are you in?
And what is the level of web-community, academic, commercial, and
governmental support for AGI in your country?
Ed Porter
I live in Moscow. AGI-related activities are nonexistent here; there's
a small
Derek Zahn wrote:
Richard Loosemore writes:
This becomes a problem because when we say of another person that they
meant something by their use of a particular word (say cat), what we
actually mean is that that person had a huge amount of cognitive
machinery connected to that word cat
Matt,
Matt: AGI research needs
special hardware with massive computational capabilities.
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I would argue, essential - to ground these
Mike,
1. Bush walks like a cowboy, doesn't he?
The only way a human - or a machine - can make sense of sentence 1 is by
referring to a mental image/movie of Bush walking.
That's not the only way to make sense of the saying.
There are many other ways: chat with other people, or look on Google:
Mike,
MIKE TINTNER # Hawkins' point as to how the brain can decide in a
hundred steps what takes a computer a million or billion steps (usually
without much success) is:
The answer is the brain doesn't 'compute' the answers; it retrieves the
answers from memory. In essence, the answers
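The retrieval-versus-computation contrast Hawkins draws is the same one memoization exploits in software: pay the cost of deriving an answer once, then look it up thereafter. A minimal, hypothetical sketch (Fibonacci is just a stand-in for any expensive derivation):

```python
import functools

def fib_compute(n):
    """Naive recursion: exponentially many steps, like re-deriving an
    answer from scratch every time it is needed."""
    return n if n < 2 else fib_compute(n - 1) + fib_compute(n - 2)

@functools.lru_cache(maxsize=None)
def fib_retrieve(n):
    """Memoized: each answer is derived once, then retrieved from the
    cache, so the call tree collapses to a handful of lookups."""
    return n if n < 2 else fib_retrieve(n - 1) + fib_retrieve(n - 2)

fib_retrieve(100)  # returns immediately; the naive version would take
                   # an astronomical number of steps to get here
```

The analogy is loose, of course: the brain's "memory" is content-addressable and approximate rather than an exact-key cache, but the step-count asymmetry is the same.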
Matt,
First of all, we are, I take it, discussing how the brain or a computer can
recognize an individual face from a video - obviously the brain cannot
match a face to a selection of a billion other faces.
Hawkins' answer to your point that the brain runs masses of neurons in
parallel
On Dec 7, 2007 7:05 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
You are asking good questions about the mechanisms, which I am trying to
explore empirically. No good answers to this yet, although I have many
candidate solutions, some of which (I think) look like your above model.
I
Richard,
the instance nodes are such an
important mechanism that everything depends on the details of how they
are handled.
Correct.
So, to consider one or two of the details that you mention. You would
like there to be only a one-way connection between the generic node (do
you call
--- Mike Tintner [EMAIL PROTECTED] wrote:
Matt: AGI research needs
special hardware with massive computational capabilities.
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I
Vlad,
Agreed. Copycat is a lot more wild and crazy at the low level than my
system would be. But my system might operate more like it at a higher, more
deliberative level. For example, this might be the case if I were trying to
attack a difficult planning problem, such as how to write an answer
RE: [agi] Do we need massive computational capabilities? ED PORTER # When
you say It only takes a few steps to retrieve something from memory. I hope
you realize that depending how you count steps, it actually probably takes
hundreds of millions of steps or more. It is just that millions of
Bob : AGI related activities everywhere are minimal right now. Even people
interested in AI often have no idea what the term AGI means. The
meme hasn't spread very far beyond a few technologists and
visionaries. I think it's only when someone has some amount of
demonstrable success with an
Mike Tintner wrote:
Richard: For my own system (and for Hofstadter too), the natural
extension of the
system to a full AGI design would involve
a system [that] can change its approach and rules of reasoning at
literally any step of problem-solving it will be capable of
producing all the
56 matches