*I just want to jump in here and say I appreciate the content of this post,
as opposed to many of the posts of late, which were just name-calling and
bickering... hope to see more content instead.*
Richard Loosemore [EMAIL PROTECTED] wrote: Ed Porter wrote:
Jean-Paul,
Although complexity is
However, part of the key to intelligence is **self-tuning**.
I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.
I agree with Ben here; isn't one of the core concepts of AGI the ability to
modify its
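In Python, a minimal sketch of what such a self-tuning loop could look like; the parameter names, the evaluate() measure, and the tuning rule are all invented for illustration and are not from Novamente or any other real system:

import random

# Hypothetical self-tuning loop: the system nudges its own parameters in
# whatever direction improves a performance score it can measure itself.
def evaluate(params):
    # Stand-in for any internally measurable performance signal.
    return -(params["activation_threshold"] - 0.3) ** 2 \
           - (params["decay_rate"] - 0.1) ** 2

params = {"activation_threshold": 0.9, "decay_rate": 0.5}
best = evaluate(params)
for _ in range(1000):
    trial = dict(params)
    key = random.choice(list(trial))
    trial[key] += random.gauss(0, 0.05)  # propose a small random change
    score = evaluate(trial)
    if score > best:                     # keep the change only if it helps
        params, best = trial, score
print(params)  # ends near the optimum with no developer in the loop

Nothing this trivial would scale to a real system's parameter space, but it makes the claim concrete: the tuning loop can live inside the system rather than with the developer.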
James Ratcliff wrote:
'Irrationality' is used to describe thinking and actions which are, or appear
to be, less useful or logical than the alternatives, and 'rational' would be
the opposite of that. This line of thinking is more concerned with the
behaviour of the entities, which requires goal orienting and
Richard: Suppose, further, that the only AGI systems that really do work
are ones
in which the symbols never use truth values but use other stuff (for
which there is no interpretation) and that the thing we call a truth
value is actually the result of an operator that can be applied to a
James: Either of the systems described will have a Complexity Problem; any
AGI will, because it is a very complex system.
System 1 I don't believe is strictly practical, as few truth values can be
stored locally, directly on the frame. More realistic is that there may be a
temporary value such
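In Python, one toy way to see the contrast being discussed; every structure here is invented for the example, and no claim is made that either system is actually built this way:

# System 1: a truth value is a number stored directly on the frame.
frame = {"predicate": "bird(tweety)", "truth": 0.9}

# System 2: no symbol stores a truth value. What we call a truth value
# is the result of an operator applied to a cluster of lower-level
# elements whose own contents need no interpretation as truth at all.
cluster = [0.8, 1.0, 0.7, 0.95]  # stand-ins for whatever the symbols carry

def truth_operator(elements):
    # One possible operator: mean agreement across the cluster.
    return sum(elements) / len(elements)

print(frame["truth"])           # System 1: read a stored value
print(truth_operator(cluster))  # System 2: derive the value on demand

On James's point, the System 2 value exists only while it is being computed, which is roughly what a temporary, cached value would give you.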
Well, this wasn't quite what I was pointing to: there will always be a
need for parameter tuning. That goes without saying.
The point was that if an AGI developer were to commit to System 1, they
are never going to get to the (hypothetical) System 2 by anything as
trivial as parameter
James Ratcliff wrote:
What I don't see, then, is anywhere that System 2 (a neural net?) is
better than System 1, or where it avoids the complexity issues.
I was just giving an example of the degree of flexibility required - the
exact details of this example are not important.
My point was
Mike Tintner wrote:
Richard: If someone asked that, I couldn't think of anything to say except
...
why *wouldn't* it be possible? It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.
Jeez, Richard, of course,
Richard: in my system, decisions about what to do next are the
result of hundreds or thousands of atoms (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation. This cloud of knowledge
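In Python, a toy version of that settling process, brute-forced for brevity; a real system would relax dynamically rather than enumerate, and the atoms, links, and weights here are all invented:

import itertools

# Each "atom" is a small piece of knowledge; a link weight scores how
# coherently two atoms sit together in one model (negative = they clash).
atoms = ["bank_river", "bank_money", "water", "loan"]
links = {("bank_river", "water"): 1.0,
         ("bank_money", "loan"): 1.0,
         ("bank_river", "bank_money"): -2.0,
         ("bank_river", "loan"): -0.5,
         ("bank_money", "water"): -0.5}

def coherence(active):
    return sum(w for (a, b), w in links.items()
               if a in active and b in active)

context = {"water"}  # context-dependence: 'water' is already active
candidates = (context | set(c)
              for r in range(len(atoms) + 1)
              for c in itertools.combinations(atoms, r))
print(max(candidates, key=coherence))  # the river reading wins here

Change the context to {"loan"} and the money reading wins instead, which is the context-dependence being described.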
Mike Tintner wrote:
Well, I'm not sure if not doing logic necessarily means a system is
irrational, i.e. if rationality equates to logic. Any system
consistently followed can be classified as rational. If, for example, a
program consistently does Freudian free association and produces nothing
but a chain of
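That example can be made literal in a few lines of Python: a program that only free-associates, yet follows its (made-up) rule with perfect consistency:

# Deterministic free association over an invented lookup table -- a
# consistent rule, consistently followed, whatever the output is worth.
assoc = {"mother": "sea", "sea": "ship",
         "ship": "journey", "journey": "mother"}

def free_associate(word, steps):
    chain = [word]
    for _ in range(steps):
        chain.append(assoc[chain[-1]])
    return chain

print(free_associate("mother", 6))
# ['mother', 'sea', 'ship', 'journey', 'mother', 'sea', 'ship']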
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
To the extent it is not proprietary, could you please list some of the types
of parameters that have to be tuned, and the types, if any, of
Loosemore-type complexity problems you envision in Novamente or have
experienced with
Mike,
I think you are going to have to be specific about what you mean by
'irrational', because you mostly just say that all the processes that
could possibly exist in computers are 'rational', and I am wondering what
else 'irrational' could possibly mean. I have named many
Jean-Paul Van Belle wrote:
Interesting - after drafting three replies I have come to realize
that it is possible to hold two contradictory views and live or even
run with it. Looking at their writings, both Ben and Richard know damn
well what complexity means and entails for AGI. Intuitively, I side
with Richard's stance
Mike Tintner wrote:
Richard: For my own system (and for Hofstadter too), the natural
extension of the system to a full AGI design would involve
a system [that] can change its approach and rules of reasoning at
literally any step of problem-solving; it will be capable of
producing all the
Ben: To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.
Big mistake. Think what would have happened if Freud had omitted the 40-odd
examples of slips in The Psychopathology of Everyday
JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.)
mean by 'complexity' (as opposed to the common usage of complex meaning
difficult).
Well, I, as an ignoramus, was wondering about this - so thank you. And it
wasn't clear at all to me from Richard's paper what he meant.
ATM: http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
has just gone through a major bug-fixing update, and is now much
better at maintaining chains of continuous thought -- after the
user has entered sufficient knowledge for the AI to think about.
It doesn't have - you
On Dec 6, 2007 8:23 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
resistance to moving onto the second stage. You have enough psychoanalytical
understanding, I think, to realise that the unusual length of your reply to
me may
Hi Ed
You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by
'complexity' (as opposed to the common usage of complex meaning difficult
be a good first
indication of how easy or hard AGI systems will be to control.
Ed Porter
Mike Tintner wrote:
Richard: Now, interpreting that result is not easy,
Richard, I get the feeling you're getting understandably tired with all
your correspondence today. Interpreting *any* of the examples of *hard*
cog sci that you give is not easy. They're all useful, stimulating
stuff, but they don't add up
-- extremely large space of
possible global-local disconnects.
Ed Porter
Ed Porter wrote:
Jean-Paul,
Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.
What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper
are not very clear about the distinction.
Ed Porter
Ed Porter wrote:
Richard,
I quickly reviewed
Ed Porter wrote:
Richard,
I read your core definitions of 'computationally irreducible' and
'global-local disconnect' and by themselves they really don't distinguish
very well between complicated and complex.
That paper was not designed to be a 'complex systems for absolute
beginners' paper, so
Conclusion: there is a danger that the complexity that even Ben agrees
must be present in AGI systems will have a significant impact on our
efforts to build them. But the only response to this danger at the
moment is the bare statement made by people like Ben that I do not
think that the
Derek Zahn wrote:
Richard Loosemore writes: Okay, let me try this. Imagine that we got a
bunch of computers [...]
Thanks for taking the time to write that out. I think it's the most
understandable version of your argument that you have written yet. Put it on
the web somewhere and link to it whenever the
Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.
I do not believe
Benjamin Goertzel wrote:
Richard,
Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!
I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)
The argument I presented was not a conjectural assertion; it made the
following coherent case:
Mike Tintner wrote:
Richard, the problem here is that I am not sure in what sense you are
using the word 'rational'. There are many usages. One of those usages is
very common in cog sci, and if I go with *that* usage your claim is
completely wrong: you can pick up an elementary cog psy
Mike Tintner wrote:
Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational. My system is not rational in that sense at all.
Richard,
Out of interest, rather than pursuing the original argument:
1) Who are these programmers/
Scott Brown wrote:
Hi Richard,
On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Try to think of some other example where we have tried to build a system
that behaves in a certain overall way, but we started out by using
components that interacted in a completely funky way, and we succeeded
in
Ben: Obviously the brain contains answers to many of the unsolved problems
of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under recursive self-improvement). However, current neuroscience does
NOT contain these answers. And neither you nor anyone else has ever
Ed Porter wrote:
RICHARD LOOSEMORE: There is a high prima facie *risk* that intelligence
involves a
involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect),
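For anyone who wants the flavour of a global-local disconnect in a dozen lines of Python, elementary cellular automaton Rule 30 is the stock example (the standard illustration, not anything specific to Richard's paper): the local rule is trivially simple, yet the global pattern is, as far as anyone knows, obtainable only by running the system:

RULE = 30  # Wolfram's Rule 30: a three-cell local update rule

def step(cells):
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n]
                      + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

# Nothing in the local rule hints at the global behaviour; knowing what
# the whole system does requires running the whole system. That is the
# irreducibility being claimed, at far higher stakes, for intelligence.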
Mike Tintner wrote:
Richard: science does too know a good deal about brain
architecture! I *know* cognitive science. Cognitive science is a friend
of mine.
Mike, you are no cognitive scientist :-).
Thanks, Richard, for keeping it friendly - but - are you saying cog
sci knows the:
*'engram' - how info
Mike Tintner wrote:
Your paper represents almost a literal application of the idea that
creativity is ingenious/lateral. Hey, it's no trick to be just
ingenious/lateral or fantastic.
Ah ... before creativity was what was lacking. But now you're shifting
arguments and it's something else that is
Mike,
Matt: The whole point of using massive parallel computation is to do the
hard part of the problem.
The whole idea of massive parallel computation here surely has to be wrong.
And yet none of you seem able to face this, to my mind, obvious truth.
Whom do you mean by 'you' in this
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.
I don't claim you need special hardware.
-- Matt Mahoney, [EMAIL PROTECTED]
Dennis:
MT: none of you seem able to face this, to my mind, obvious truth.
Whom do you mean by 'you' in this context?
Do you think that everyone here agrees with Matt on everything?
Quite the opposite is true -- almost every AI researcher has his own
unique set of beliefs.
I'm delighted to be
More generally, I don't perceive any readiness to recognize that the brain
has the answers to all the many unsolved problems of AGI -
Benjamin Goertzel wrote:
[snip]
And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI. The closest
thing to such an argument that I've seen
was given by Eric Baum in his book What Is
Thought?, and I note that Eric has
Benjamin Goertzel wrote:
As an example of a creative leap (that is speculative and may be wrong, but
is certainly creative), check out my hypothesis of emergent
social-psychological intelligence as related to mirror neurons and octonion
algebras: