Ben: To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.
Big mistake. Think what would have happened if Freud had omitted the 40-odd
examples of slips in The Psychopathology of Everyday Life.
JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.)
mean by 'complexity' (as opposed to the common usage of complex meaning
difficult).
Well, I, as an ignoramus, was wondering about this - so thank you. And it
wasn't clear at all to me from Richard's paper what he meant.
ATM: http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
has just gone through a major bug-solving update, and is now much
better at maintaining chains of continuous thought -- after the
user has entered sufficient knowledge for the AI to think about.
It doesn't have - you
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben: To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.
Big mistake. Think what would have happened if Freud had omitted the 40-odd
examples of slips in The Psychopathology of Everyday Life.
THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO
EXPLICITLY DEAL WITH 500K TUPLES
And I asked -- Do you believe that this is some sort of huge conceptual
breakthrough?
On Dec 6, 2007 8:23 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
resistance to moving onto the second stage. You have enough psychoanalytical
understanding, I think, to realise that the unusual length of your reply to
me may
Jean-Paul,
Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.
What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper Complex Systems,
There is no doubt that complexity, in the sense typically used in
dynamical systems theory, presents a major issue for AGI systems. Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a
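As a toy illustration of why such tuning is hard (hypothetical, not drawn from any actual AGI design), consider two coupled logistic maps: nearby settings of the single parameter r give qualitatively different global behavior.

def f(x, r):
    # One logistic-map update; stays in [0, 1] for 0 <= r <= 4.
    return r * x * (1.0 - x)

def step(x, y, r, c):
    # Two maps coupled by a convex combination with strength c.
    return ((1 - c) * f(x, r) + c * f(y, r),
            (1 - c) * f(y, r) + c * f(x, r))

def run(r, c=0.05, steps=1000):
    x, y = 0.3, 0.6
    for _ in range(steps):
        x, y = step(x, y, r, c)
    return x, y

print(run(r=3.2))  # settles into a small periodic orbit
print(run(r=3.9))  # chaotic: same structure, slightly larger parameter

With many interdependent parameters instead of one, there is no smooth gradient to follow: nearby settings can flip the global regime.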
Mike Tintner wrote:
Richard: Now, interpreting that result is not easy,
Richard, I get the feeling you're getting understandably tired with all
your correspondence today. Interpreting *any* of the examples of *hard*
cog sci that you give is not easy. They're all useful, stimulating
stuff,
Ed Porter wrote:
Richard,
I quickly reviewed your paper, and you will be happy to note that I
had underlined and highlighted it, so such skimming was more valuable than it
otherwise would have been.
With regard to COMPUTATIONAL IRREDUCIBILITY, I guess a lot depends
on
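For list members who haven't met the term: a minimal sketch of computational irreducibility in Wolfram's sense, using elementary cellular automaton Rule 30 (a generic example, not taken from Richard's paper). No known shortcut predicts the center cell at step n faster than running all n steps.

RULE = 30  # Wolfram's Rule 30, written as an 8-bit lookup table

def step(cells):
    # One synchronous update of a 1-D binary CA with wraparound edges.
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                       # single seed cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)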
Mike Tintner wrote:
JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.)
mean by 'complexity' (as opposed to the common usage of complex meaning
difficult).
Well, I, as an ignoramus, was wondering about this - so thank you. And it
wasn't clear at all to me from Richard's paper what he meant.
Mike Tintner wrote on Thu, 6 Dec 2007:
ATM:
http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
has just gone through a major bug-solving update, and is now much
better at maintaining chains of continuous thought -- after the
user has entered sufficient knowledge for the AI to think about.
Mark,
First you attack me for making a statement which you falsely claimed
indicated I did not understand the math in the Collins article (and
potentially discreted everything I said on this list). Once it was shown
that that attack was unfair, rather than apologizing sufficiently for the
unfair
Ed Porter wrote:
Jean-Paul,
Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.
What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper
Ben,
Your email below is a much more concise statement of the basic point
I was trying to make
Ed Porter
Richard,
I read your core definitions of computationally irreducible and
global-local disconnect and by themselves they really don't distinguish
very well between complicated and complex.
But I did assume from your paper and other writings you meant complex
although your core definitions are
Ed Porter wrote:
Richard,
I read your core definitions of computationally irreducible and
global-local disconnect and by themselves they really don't distinguish
very well between complicated and complex.
That paper was not designed to be a "complex systems for absolute
beginners" paper, so
Richard,
You will be happy to note that I have copied the text of your reply to my
Valuable Clippings From AGI Mailing List file. Below are some comments.
RICHARD LOOSEMORE= I now understand that you have indeed heard of
complex systems before, but I must insist that in your summary above
Richard, The problem here is that I am not sure in what sense you are using
the
word rational. There are many usages. One of those usages is very
common in cog sci, and if I go with *that* usage your claim is completely
wrong: you can pick up an elementary cog psy textbook and find at least
Conclusion: there is a danger that the complexity that even Ben agrees
must be present in AGI systems will have a significant impact on our
efforts to build them. But the only response to this danger at the
moment is the bare statement made by people like Ben that I do not
think that the
Richard Loosemore writes: Okay, let me try this. Imagine that we got a
bunch of computers [...]
Thanks for taking the time to write that out. I think it's the most
understandable version of your argument that you have written yet. Put it on
the web somewhere and link to it whenever the
Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.
I do not believe
Benjamin Goertzel wrote:
Richard,
Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!
I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)
The argument I presented was not a conjectural assertion; it made the
Ed,
Get a grip. Try to write with complete words in complete sentences
(unless discreted means a combination of excreted and discredited -- which
works for me :-).
I'm not coming back for a second swing. I'm still pursuing the first
one. You just aren't oriented well enough to
Mark,
You claimed I made a particular false statement about the Collins paper.
(That by itself could have just been a misunderstanding or an honest
mistake.) But then you added an insult to that by implying I had probably
made the alleged error because I was incapable of understanding the
--- Ed Porter [EMAIL PROTECTED] wrote:
I have a lot of respect for Google, but I don't like monopolies, whether it
is Microsoft or Google. I think it is vitally important that there be
several viable search competitors.
I wish this wiki one luck. As I said, it sounds a lot like your
Matt,
Does a PC become more vulnerable to viruses, worms, Trojan horses, root
kits, and other web attacks if it becomes part of a P2P network? And if so,
why and how much?
Ed Porter
With regard to your questions below, if you actually took the time to
read my prior responses, I think you will see I have substantially answered
them.
No, Ed. I don't see that at all. All I see is you refusing to answer them
even when I repeatedly ask them. That's why I asked them again.
--- Ed Porter [EMAIL PROTECTED] wrote:
Matt,
Does a PC become more vulnerable to viruses, worms, Trojan horses, root
kits, and other web attacks if it becomes part of a P2P network? And if so,
why and how much?
It does if the P2P software has vulnerabilities, just like any other server or
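A concrete way to see the point (an illustrative sketch only, not any particular P2P stack): joining a P2P network means running a listener, so every remote peer gets a channel into your parsing code, which a pure client does not expose.

import socket

# A peer must accept inbound connections, so every remote peer reaches the
# code that parses whatever arrives on this socket. A machine that only
# makes outbound client connections exposes no such listener.
peer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
peer.bind(("0.0.0.0", 0))    # any free port, as a real peer would advertise
peer.listen(5)
print("peer listening on port", peer.getsockname()[1])
peer.close()                 # demo only; a real peer would accept() here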
Mike Tintner wrote:
Richard, The problem here is that I am not sure in what sense you are
using the
word rational. There are many usages. One of those usages is very
common in cog sci, and if I go with *that* usage your claim is
completely wrong: you can pick up an elementary cog psy
Derek Zahn wrote:
Richard Loosemore writes:
Okay, let me try this.
Imagine that we got a bunch of computers [...]
Thanks for taking the time to write that out. I think it's the most
understandable version of your argument that you have written yet. Put
it on the web somewhere and
Richard,
What is your specific complaint about the 'viability of the framework'?
Ed,
This line of data gathering is very interesting to me as well, though I quickly
found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of
Benjamin Goertzel wrote:
Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the
Matt,
So if it is perceived as something that increases a machine's vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
Matt,
So if it is perceived as something that increases a machine's vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter
Why are you having this discussion on an AGI list?
Will Pearson
On Dec 7, 2007 1:20 AM, Ed Porter [EMAIL PROTECTED] wrote:
This is something I have been telling people for years: that you should be
able to extract a significant amount (but probably far from all) of world
knowledge by scanning large corpora of text. I would love to see how well
it actually
James,
Do you have any description or examples of your results?
This is something I have been telling people for years: that you should be
able to extract a significant amount (but probably far from all) of world
knowledge by scanning large corpora of text. I would love to see how well
it
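For what it's worth, a minimal sketch of the kind of extraction being discussed (an assumption for illustration, not James's actual pipeline, which he hasn't posted): harvest "X is/was a Y" assertions from raw text as relational tuples.

import re

# Harvest "X is/was a Y" assertions from raw text as (X, isa, Y) tuples.
# Real corpora would need parsing, coreference, and heavy filtering; this
# only illustrates the basic move of mining relations from text.
ISA = re.compile(r"\b([A-Z][a-z]+)\s+(?:is|was)\s+an?\s+([a-z]+)\b")

def extract_isa(text):
    return [(subj, "isa", obj) for subj, obj in ISA.findall(text)]

sample = "Ahab was a captain. The whale dived. Ishmael is a sailor."
print(extract_isa(sample))
# [('Ahab', 'isa', 'captain'), ('Ishmael', 'isa', 'sailor')]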
Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational. My system is not rational in that sense at all.
Richard,
Out of interest, rather than pursuing the original argument:
1) Who are these programmers/
It was part of a discussion of using a P2P network with OpenCog to develop
distributed AGIs.
--- William Pearson [EMAIL PROTECTED] wrote:
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
Matt,
So if it is perceived as something that increases a machine's
vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter
Why are you having
Are you saying the increase in vulnerability would be no more than that?
--- Ed Porter [EMAIL PROTECTED] wrote:
Matt,
So if it is perceived as something that increases a machine's vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter
A web browser and email increase your computer's vulnerability, but it
doesn't
Hi Richard,
On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Try to think of some other example where we have tried to build a system
that behaves in a certain overall way, but we started out by using
components that interacted in a completely funky way, and we succeeded
in
Edward,
It's certainly a trick question, since if you don't define semantics
for this knowledge thing, it can turn out to be anything from the simplest
do-nothings to full-blown, physically infeasible superintelligences. So
your assertion doesn't cut the viability of knowledge extraction for
various
Yes, it's what triggered my nitpicking reflex; I am sorry about that.
Your comment sounds fine as regards the viability of teaching an AGI
in a text-only mode without too much manual assistance, but the semantics
of what it was given is quite different.
On Dec 7, 2007 3:13 AM, Ed Porter [EMAIL
Vlad,
My response was to the following message
==
Ed,
This line of data gathering is very interesting to me as well, though I quickly
found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of relational
James Ratcliff wrote:
Richard,
What is your specific complaint about the 'viability of the framework'?
I was referring mainly to my complex systems problem (currently being
hashed to death on a parallel thread, and many times before).
Richard Loosemore
Ed,
This line of data
Mike Tintner wrote:
Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational. My system is not rational in that sense at all.
Richard,
Out of interest, rather than pursuing the original argument:
1) Who are
Scott Brown wrote:
Hi Richard,
On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Try to think of some other example where we have tried to build a system
that behaves in a certain overall way, but we started out by using
components that
Well, I'm not sure that not doing logic necessarily means a system is
irrational, i.e. if rationality equates to logic. Any system consistently
followed can be classified as rational. If, for example, a program consistently
does Freudian free association and produces nothing but a chain of
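A toy version of that example (hypothetical, nobody's actual program): the procedure below consistently follows a single association rule, so by this criterion it is rational, yet it performs nothing a logician would call inference.

import random

# A fixed association table; words not listed associate with themselves.
ASSOCIATIONS = {
    "mother": ["sea", "home"],
    "sea": ["ship", "salt"],
    "ship": ["voyage", "anchor"],
    "home": ["hearth", "mother"],
}

def free_associate(word, steps, seed=0):
    random.seed(seed)           # same seed, same chain: fully consistent
    chain = [word]
    for _ in range(steps):
        word = random.choice(ASSOCIATIONS.get(word, [word]))
        chain.append(word)
    return chain

print(" -> ".join(free_associate("mother", 5)))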
--- Ed Porter [EMAIL PROTECTED] wrote:
Are you saying the increase in vulnerability would be no more than that?
Yes, at least in the short term, if we are careful with the design. But then again,
you can't predict what AGI will do, or else it wouldn't be intelligent. I
can't say for certain long
Interesting - after drafting three replies I have come to realize that it is
possible to hold two contradictory views and live or even run with it. Looking
at their writings, both Ben and Richard know damn well what complexity means and
entails for AGI.
Intuitively, I side with Richard's stance
Richard,
It's a neural network -- a set of nodes (concepts), where every node can be
connected to a set of other nodes. Every connection has its own
weight.
Some nodes are connected to external devices.
For example, one node can be connected to one word in a text
dictionary (that is an
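Filling in the details the message leaves out (these are my assumptions, not the poster's specification), the structure might look like:

class Node:
    # A concept node with weighted links; optionally bound to an external
    # symbol such as a dictionary word.
    def __init__(self, name, word=None):
        self.name = name
        self.word = word            # external binding, or None
        self.links = {}             # neighbor Node -> connection weight

    def connect(self, other, weight):
        self.links[other] = weight
        other.links[self] = weight  # connections are symmetric here

cat = Node("cat-concept", word="cat")
animal = Node("animal-concept", word="animal")
pet = Node("pet-concept")
cat.connect(animal, 0.9)
cat.connect(pet, 0.7)

# One hop of activation spreading from "cat":
for neighbor, weight in cat.links.items():
    print(neighbor.name, weight)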