On Dec 3, 2007 7:19 PM, Ed Porter [EMAIL PROTECTED] wrote:
Perhaps one aspect of the AGI-at-home project would be to develop a good
generalized architecture for wedding various classes of narrow AI and AGI in
such a learning environment.
Yes, I think this is the key aspect, the meta-problem
I actually thought that that was one of the more positive pieces I've found.
Listeners may come out with a bad (mis-)impression, but NPR did nothing to
abet that.
Joshua
2007/12/3, Bob Mottram [EMAIL PROTECTED]:
Perhaps a good word of warning is that it will be really easy to
On Dec 3, 2007 11:03 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
On Monday 03 December 2007, Mike Dougherty wrote:
Another method of doing search agents, in the meantime, might be to
take neural tissue samples (or simple scanning of the brain) and try to
simulate a patch of neurons via
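A toy illustration of the simplest thing "simulate a patch of neurons"
could mean is a leaky integrate-and-fire sketch; every constant below is
an assumption for illustration, none comes from the thread.

    import numpy as np

    # Toy leaky integrate-and-fire patch (all constants assumed).
    N, STEPS, DT = 100, 1000, 1e-3        # neurons, time steps, 1 ms per step
    TAU, V_TH, V_RESET = 20e-3, 1.0, 0.0  # membrane time constant (s), threshold, reset

    rng = np.random.default_rng(0)
    v = np.zeros(N)                        # membrane potentials
    w = rng.normal(0.0, 0.1, (N, N))       # random recurrent weights

    for _ in range(STEPS):
        spiked = v >= V_TH                 # which neurons fire this step
        v[spiked] = V_RESET                # reset the neurons that fired
        drive = w @ spiked + rng.normal(1.05, 0.5, N)  # recurrence + noisy input
        v += (DT / TAU) * (-v + drive)     # leaky integration toward the drive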
Bryan, The name "grub" sounds familiar. That is probably it. Ed
-----Original Message-----
From: Bryan Bishop [mailto:[EMAIL PROTECTED]
Sent: Monday, December 03, 2007 10:47 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]
On Thursday 29
RICHARD LOOSEMORE= You have no idea of the context in which I made
that sweeping dismissal.
If you have enough experience of research in this area you will know
that it is filled with bandwagons, hype and publicity-seeking. Trivial
models are presented as if they are fabulous
John,
I am sure there is interesting stuff that can be done. It would be
interesting just to see what sort of an AGI could be made on a PC.
I would be interested in your ideas for how to make a powerful AGI without a
vast amount of interconnect. The major schemes I know about for reducing
Joshua Fox wrote:
I actually thought that that was one of the more positive pieces I've
found. Listeners may come out with a bad (mis-)impression, but NPR did
nothing to abet that.
Agreed.
It is just that the baseline is so low that I suppose we feel gratified
when they only miss the point
Ed Porter wrote:
RICHARD LOOSEMORE= You have no idea of the context in which I made
that sweeping dismissal.
If you have enough experience of research in this area you will know
that it is filled with bandwagons, hype and publicity-seeking. Trivial
models are presented as if they are
Josh,
A pen-pal - an AI/robotics guy - has been waxing enthusiastic about your
book. For him:
the basic idea in his book is to devise what is essentially the basic
computational unit - BCU [this is my term, btw] that can be extended
indefinitely horizontally [in modules], and vertically
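"BCU" is the correspondent's own coinage, so any code reading of it is
conjecture; one hypothetical sketch of a unit that extends horizontally
(more peers per layer) and vertically (layers of layers):

    # Hypothetical "basic computational unit" (BCU): identical units composed
    # sideways and upward. Behavior is a stub; only the composition pattern
    # is the point.
    class BCU:
        def __init__(self, children=None):
            self.children = children or []       # vertical composition

        def process(self, signal: float) -> float:
            if not self.children:                # leaf unit: pass-through stub
                return signal
            # A parent unit summarizes what its children report.
            outputs = [c.process(signal) for c in self.children]
            return sum(outputs) / len(outputs)

    layer = [BCU() for _ in range(4)]            # horizontal extension
    root = BCU(children=layer)                   # vertical extension
    print(root.process(1.0))                     # -> 1.0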
Richard,
It is not clear how valuable your 25 years of hard won learning is if it
causes you to dismiss valuable scientific work that seems to have eclipsed
the importance of anything I or you have published as trivial exercises in
public relations without giving any reason whatsoever for the
From: Ed Porter [mailto:[EMAIL PROTECTED]
John,
I am sure there is interesting stuff that can be done. It would be
interesting just to see what sort of an AGI could be made on a PC.
Yes, it would be interesting to see what could be done on a small cluster of
modern server-grade computers. I
John,
As you say, the hardware is just going to get better and better. In five
years the PCs of most of the people on this list will probably have at
least 8 cores and 16 GB of RAM.
But even with a current 32-bit PC with, say, 4 GB of RAM you should be able to
build an AGI that would be a
Mike,
Matt: The whole point of using massively parallel computation is to do the
hard part of the problem.
The whole idea of massively parallel computation here surely has to be wrong.
And yet none of you seem able to face this, to my mind, obvious truth.
Who do you mean by "you" in this
Richard,
1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
Could you describe what the *real* grounding problem is?
It would be nice to consider an example.
Say we are trying to build an AGI for the purpose of running
Richard,
3) A way to represent things - and in particular, uncertainty - without
getting buried up to the eyeballs in (e.g.) temporal logics that nobody
believes in.
Conceptually, the way of representing things is described very well.
It's a neural network -- a set of nodes (concepts), where every
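Gorelik's message is cut off there, but a minimal sketch of the
node-and-weighted-link representation he appears to describe (the class
and names below are assumptions, not his):

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """One node; links carry graded (uncertain) association strengths."""
        name: str
        links: dict = field(default_factory=dict)  # neighbor name -> strength in [0, 1]

        def relate(self, other: "Concept", strength: float) -> None:
            # Strength stands in for uncertainty: 1.0 is a fully confident link.
            self.links[other.name] = strength

    cat, mammal = Concept("cat"), Concept("mammal")
    cat.relate(mammal, 0.95)   # "a cat is a mammal", held with high confidence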
From: Ed Porter [mailto:[EMAIL PROTECTED]
But even with a current 32-bit PC with, say, 4 GB of RAM you should be able to
build an AGI that would be a meaningful proof of concept. Let's say 3 GB is
for representation, at say 60 bytes per atom (less than my usual 100
bytes/atom because using
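Porter's numbers pencil out to roughly fifty million atoms; a quick check
(only the 3 GB and 60-bytes-per-atom figures are his, the rest is arithmetic):

    # Capacity check for the 32-bit PC scenario.
    REPRESENTATION_BYTES = 3 * 2**30   # 3 GiB reserved for representation
    BYTES_PER_ATOM = 60                # Porter's per-atom estimate

    atoms = REPRESENTATION_BYTES // BYTES_PER_ATOM
    print(f"{atoms:,} atoms")          # 53,687,091 -- about 54 million atoms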
Dennis: 1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
Say we are trying to build an AGI for the purpose of running an
intelligent chat-bot.
What would be the grounding problem in this case?
Example:
Ken,
Wow. I was going to say, this is one of the most interesting posts I have
read on the AGI list in a while, until I realized it wasn't on the AGI list.
Too bad. I have copied this response and your original email (below) to the
AGI list to share the inspiration.
In the following I have
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.
I don't claim you need special hardware.
-- Matt Mahoney, [EMAIL PROTECTED]
Dennis:
MT: none of you seem able to face this, to my mind, obvious truth.
Who do you mean by "you" in this context?
Do you think that everyone here agrees with Matt on everything?
Quite the opposite is true -- almost every AI researcher has his own
unique set of beliefs.
I'm delighted to be
--- Ed Porter [EMAIL PROTECTED] wrote:
Matt,
In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably
quoting below I discussed the bandwidth issues. I am assuming nodes
directly talk to each other, which is probably overly optimistic, but still
are limited by the fact
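The scale problem is easy to make concrete: no node in a 10^9-node network
can hold a full routing table, so each lookup has to be routed. A
back-of-envelope sketch, assuming a Kademlia-style overlay (my assumption,
not Mahoney's stated design):

    import math

    NODES = 10**9                          # Mahoney's proposed network size
    hops = math.log2(NODES)                # Kademlia-style routing is O(log N)
    print(f"~{hops:.0f} overlay hops per lookup")        # ~30 hops

    RTT = 0.050                            # assumed 50 ms per hop on the open internet
    print(f"~{RTT * hops:.1f} s per lookup")             # ~1.5 s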
More generally, I don't perceive any readiness to recognize that the brain
has the answers to all the many unsolved problems of AGI -
Obviously the brain contains answers to many of the unsolved problems of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under
Dennis Gorelik wrote:
Richard,
1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
Could you describe what the *real* grounding problem is?
It would be nice to consider an example.
Say we are trying to build an AGI for
Benjamin Goertzel wrote:
[snip]
And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI. The closest
thing to such an argument that I've seen
was given by Eric Baum in his book What Is Thought?, and I note that Eric has
MATT MAHONEY= My design would use most of the Internet (10^9 P2P nodes).
ED PORTER= That's ambitious. Easier said than done unless you have a
Google, Microsoft, or mass popular movement backing you.
ED PORTER= I mean, what would motivate the average American, or even
the average
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
[snip]
And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI. The closest
thing to such an argument that I've seen
Dennis Gorelik wrote:
Richard,
3) A way to represent things - and in particular, uncertainty - without
getting buried up to the eyeballs in (e.g.) temporal logics that nobody
believes in.
Conceptually, the way of representing things is described very well.
It's a neural network -- a set of nodes
Ed Porter wrote:
Richard,
It is not clear how valuable your 25 years of hard won learning is if
it causes you to dismiss valuable scientific work that seems to have
eclipsed the importance of anything I or you have published as
trivial exercises in public relations without giving any reason
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the system itself to generate its own
understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
MECHANISM
Benjamin Goertzel wrote:
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
[snip]
And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI. The closest
thing to such an
--- Ed Porter [EMAIL PROTECTED] wrote:
MATT MAHONEY= My design would use most of the Internet (10^9 P2P nodes).
ED PORTER= That's ambitious. Easier said than done unless you have a
Google, Microsoft, or mass popular movement backing you.
It would take some free software that people
Benjamin Goertzel wrote:
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the system itself to generate its own
understanding (or, at least, acquisition) of grammar IN THE
Richard,
Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!
I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)
The argument I presented was not a conjectural assertion; it made the
following coherent case:
The particular NL parser paper in question, Collins's "Convolution Kernels
for Natural Language"
(http://l2r.cs.uiuc.edu/~danr/Teaching/CS598-05/Papers/Collins-kernels.pdf),
is actually saying something quite important that extends way beyond parsers
and is highly applicable to AGI in general.
It
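For readers who have not seen the paper: the kernel scores two parse trees
by implicitly counting their common subtree fragments with a short
recurrence. A stripped-down sketch (toy tree encoding; the paper's decay
factor lambda is omitted):

    # Count common subtree fragments of two parse trees, Collins-Duffy style.
    # Trees are (label, children) tuples.

    def common_rooted(n1, n2):
        """Number of common fragments rooted at this pair of nodes."""
        (label1, kids1), (label2, kids2) = n1, n2
        if label1 != label2 or [k[0] for k in kids1] != [k[0] for k in kids2]:
            return 0                       # different productions: nothing shared
        if not kids1:
            return 1                       # matching leaves
        prod = 1
        for k1, k2 in zip(kids1, kids2):   # the paper's key recurrence
            prod *= 1 + common_rooted(k1, k2)
        return prod

    def tree_kernel(t1, t2):
        """Sum the rooted counts over all node pairs."""
        def nodes(t):
            yield t
            for child in t[1]:
                yield from nodes(child)
        return sum(common_rooted(a, b) for a in nodes(t1) for b in nodes(t2))

    np_ = ("NP", [("D", []), ("N", [])])
    t1 = ("S", [np_, ("VP", [("V", [])])])
    t2 = ("S", [np_, ("VP", [("V", []), np_])])
    print(tree_kernel(t1, t2))             # counts fragments like NP -> D N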
RICHARD LOOSEMORE= There is a high prima facie *risk* that intelligence
involves a significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect),
ED PORTER=
Matt,
Perhaps you are right.
But one problem is that big Google-like compuplexes in the next five to ten
years will be powerful enough to do AGI and they will be much more efficient
for AGI search because the physical closeness of their machines will make it
possible for them to perform the
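The physical-closeness point is easy to quantify: a round trip inside one
datacenter is hundreds of times cheaper than a round trip between home PCs
(typical figures below, assumed for illustration):

    INTRA_DC_RTT = 0.2e-3   # ~0.2 ms round trip within one datacenter
    INTERNET_RTT = 80e-3    # ~80 ms round trip between distant home PCs

    print(f"{INTERNET_RTT / INTRA_DC_RTT:.0f}x more latency per round trip")  # 400x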
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
As an example of a creative leap (that is speculative and may be wrong, but is
certainly creative), check out my hypothesis of emergent social-psychological
intelligence as related to mirror neurons and octonion algebras:
What makes anyone think OpenCog will be different? Is it more
understandable? Will there be long-term aficionados who write
books on how to build systems in OpenCog? Will the developers
have experience, or just adolescent enthusiasm? I'm watching
the experiment to find out.
Well, OpenCog
OK, understood...
On Dec 4, 2007 9:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the