Some might say that if they get conservation of mass
and Newton's laws then they skipped all the useless stuff!
OK, but those some probably don't include any preschool
teachers or educational theorists. That hypothesis is completely at odds
with my own intuition
from having raised 3
Ben: Right. My intuition is that we don't need to simulate the dynamics of
fluids, powders and the like in our virtual world to make it adequate for
teaching AGIs humanlike, human-level AGI. But this could be wrong. I suppose
it depends on what kids actually learn when making cakes, skipping
Oh, and because I am interested in the potential of high-fidelity physical
simulation as a basis for AI research, I did spend some time recently looking
into options. Unfortunately the results, from my perspective, were
disappointing.
The common open-source physics libraries like ODE,
Hi Ben.
OTOH, if one wants to go the virtual-robotics direction (as is my intuition),
then it is possible to bypass many of the lower-level perception/actuation
issues and focus on preschool-level learning, reasoning and conceptual
creation.
And yet, in your paper (which I enjoyed),
narrow AI I suppose, though it's kind of on the borderline. It
does seem like one of the ways to commercialize incremental progress toward AGI.
Derek Zahn
supermodelling.net
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Pei Wang: --- I have problem with each of these assumptions and beliefs,
though I don't think anyone can convince someone who just got a big grant
that they are moving in a wrong direction. ;-)
With his other posts about the Singularity Summit and his invention of the word
Synaptronics, Modha
that considerably more
thought.
I look forward to being in the audience when you present the paper at AGI-09.
Derek Zahn
agiblog.net
Oh, one other thing I forgot to mention. To reach my cheerful conclusion about
your paper, I have to be willing to accept your model of cognition. I'm pretty
easy on that premise-granting, by which I mean that I'm normally willing to
go along with architectural suggestions to see where they
Matthias Heger:
If chess is so easy because it is completely described, complete information
about the state is available, fully deterministic, etc., then the more
important it is that your AGI can learn such an easy task before you try
something more difficult.
Chess is not easy. Becoming
As somebody who considers consciousness, qualia, and so on to be poorly-defined
anthropomorphic mind-traps, I am not interested in any such discussions. Other
people are, and I have no problem ignoring them, like I ignore a number of
individual cranks and critics who post things of similarly
I bet if you tried very hard to move the group to the forum (for example, by
only posting there yourself and periodically urging people to use it), people
could be moved there. Right now, nobody posts there because nobody else posts
there; if one wants one's stuff to be read, one sends it to
How about this:
Those who *do* think it's worthwhile to move to the forum: Instead of posting
email responses to the mailing list, post them to the forum and then post a
link to the response to the email list, thus encouraging threads to continue in
the more advanced venue.
I shall do this
Oh, also:
When I try to register a forum account, it says: Sorry, an error occurred. If you
are unsure on how to use a feature, or don't know why you got this error
message, try looking through the help files for more information.
The error returned was:
To register, please send your request to
I am reminded of this:
http://www.serve.com/bonzai/monty/classics/MissAnneElk
Date: Tue, 14 Oct 2008 17:14:39 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
OK, but you have not yet explained what your theory of consciousness is, nor
what
It has been explained many times to Tintner that even though computer hardware
works with a particular set of primitive operations running in sequence, a
hardwired set of primitive logical operations operating in sequence is NOT the
theory of intelligence that any AGI researchers are proposing
By "embodied" I think people usually mean a dense sensory connection (with a
feedback loop) to the physical world. The feedback could be as simple as
aiming a camera. However, it seems to me that an AI program connected to
YouTube could maybe have a dense enough link to the real world to charge
Ben,
Thanks for the large amount of work that must have gone into the production of
the wikibook. Along with the upcoming PLN book (now scheduled for Sept 26
according to Amazon) and re-reading The Hidden Pattern, there should be enough
material for a diligent student to grok your approach.
Thanks again Richard for continuing to make your view on this topic clear to
those who are curious.
As somebody who has tried in good faith and with limited but nonzero success to
understand your argument, I have some comments. They are just observations
offered with no sarcasm or insult
Oh, one last point:
I find your thoughts in this message quite interesting personally because I
think that puzzling out exactly what concept builders need to do, and how
they might be built to do it, is the most interesting thing in the whole world.
I am resistant to the idea that it is
Sorry for three messages in short succession. Regarding concept builders, I
have been writing in my bumbling way about this (and will continue to muse on
fundamental issues) in my little blog:
http://agiblog.net
I agree that the hardware advances are inspirational, and it seems possible
that just having huge hardware around could change the way people think and
encourage new ideas.
But what I'm really looking forward to is somebody producing a very impressive
general intelligence result that was just
Richard,
If I can make a guess at where Jim is coming from:
Clearly, intelligent systems CAN be produced. Assuming we can define
"intelligent system" well enough to recognize it, we can generate systems at
random until one is found. That is impractical, however. So, we can look at
the
Brain modeling certainly does seem to be in the news lately. Checking out
nextbigfuture.com, I was reading about that petaflop computer Roadrunner and
articles about it say that they are or will soon be emulating the entire visual
cortex -- a billion neurons. I'm sure I'm not the only one
Dr. Matthias Heger:
Which animal has the smallest level of intelligence
which still would be sufficient for a robot to be an
AGI-robot?
You ask for opinions, we got lots of those!
I believe most people on this list would consider that humans are the only
animals with
Teslas
Two things I think are interesting about these trends in
high-performance commodity hardware:
1) The flops/bit ratio (processing power vs memory) is skyrocketing. The
move to parallel architectures makes the number of high-level operations per
transistor go up, but bits of memory per
Gary Miller writes:
We're thinking "Don't feed the Trolls!"
Yeah, typical trollish behavior -- upon failing to stir the pot with one
approach, start adding blanket insults. I put Steve Richfield in my killfile a
week ago or so, but I went back to the archive to read the message in question.
Speaking of neurons and simplicity, I think it's interesting that some of the
"how much CPU power is needed to replicate brain function" arguments use the basic
ANN model, assuming a MULADD per synapse, updating at say 100 times per second
(giving a total computing power of about 10^16 OPS). But
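For reference, the arithmetic behind that 10^16 figure, using commonly assumed round numbers (all three inputs are assumptions; published neuron and synapse counts vary by orders of magnitude):

```python
# Back-of-the-envelope version of the basic-ANN compute estimate.
NEURONS = 1e11             # ~10^11 neurons in a human brain (assumed)
SYNAPSES_PER_NEURON = 1e3  # ~10^3 synapses per neuron (low-end assumption)
UPDATE_HZ = 100            # one MULADD per synapse, 100 updates per second

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * UPDATE_HZ
print(f"{ops_per_second:.0e} OPS")  # 1e+16 OPS
```

Bumping the synapse count to 10^4 per neuron pushes the estimate to 10^17, which is why such arguments are only order-of-magnitude claims.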
Mark Waser:
Does anybody have any interest in and/or willingness to program in a
different environment?
I haven't decided to what extent I'll participate in OpenCog myself yet. For
me, it depends more on whether the capabilities of the system seem worth
exploring, which in turn depends as
Steve Richfield:
It is sure nice that this is a VIRTUAL forum, for if we were all
in one room together, my posting above would probably get
me thrashed by the true AGI believers here.
Does anyone here want to throw a virtual stone?
Sure.
*plonk*
John Rose writes: So I feel that much of our brain mass is there due to the
natural richness of nature, and there may be quite a bit of overkill compared
to what would be needed in software AGI.
Are we satisfied building AGIs that cannot cope with the actual world because
it is too rich?
Vladimir Nesov: I think sterile texture of artificial environments hides
the richness of their structure from our intuition, since we already have it
imprinted by experience with the real world. Anything less than capable of
dealing with the real world won't understand cleaned up environments
For those who might not have seen it yet, seems this concept is becoming rather
popular:
http://www.msnbc.msn.com/id/24668099/
Richard Loosemore writes:
some very useful text about the symbol grounding problem.
Thank you Richard. For once I don't feel like a complete idiot. I am familiar
with these Harnad papers and find them quite clear. Beyond that I understand
your further explanation and even agree
Richard Loosemore: So, for example, if I were organizing a conference on AGI I
would want people to address such questions as:
I find your list of questions to be quite fascinating, and I'd love to
participate in an active list or conference devoted to these Foundations of
Cognitive
I noticed yesterday that most of the videos of talks and panels from AGI-08
have been uploaded (http://www.agi-08.org/schedule.php). Big thanks to the
organizers for that!
I have some difficulty getting into some of the papers but the 10-ish minute
overview talks are by and large quite
One other observation I forgot to mention: Several people brought up the
desirability of some kind of benchmark problem area to help compare the methods
and effectiveness of various approaches. For a bunch of reasons I think it
will be difficult to define such things in a way that researchers
Bob Mottram writes:
I haven't watched all of the AGI-08 videos, but of those that I have seen, the
15 minute format left me none the wiser. With limited time I would have
preferred longer talks with more depth but perhaps fewer in number,
especially on the more mathematical topics. Another
Richard Loosemore writes: Prompted by your enthusiastic write-up, I just
wasted one and a half hours scanning through all of the AGI-08 papers that I
downloaded previously. I have 28 of them; they did not include anything from
Stephen Reed, nor any NARS paper, so I guess my collection must
Richard Loosemore: I read Pei's paper and there was nothing horrifying about
it (please spare the sarcasm).
No sarcasm intended. If I had just come to the conclusion that 28 papers in a
row were a waste of time, I'd be horrified at the prospect of a 29th that would
also not give me what I
Richard Loosemore: My god, Mark: I had to listen to people having a general
discussion of grounding (the supposed theme of that workshop) without a
single person showing the slightest sign that they had more than an amateur's
perspective on what that concept actually means.
I was not at
Bruno Frandemiche asked for online AGI-related text.
If you're adventurous, I'd recommend the Workshop proceedings from 2006:
http://www.agiri.org/wiki/Workshop_Proceedings
and the conference proceedings from AGI-08:
http://www.agi-08.org/papers
Thanks, what an interesting project. Purely on the mechanical side, it shows
how far away we are from truly flexible house-friendly robust mobile robotic
devices.
I'm a big fan of the robotic approach myself. I think it is quite likely that
dealing with the messy flood of dirty data coming
I assume you are referring to Mike Tintner.
As I described a while ago, I *plonk*ed him myself a long time ago. Most mail
programs have the ability to do that, and it's a good idea to figure out how to
do it with your own email program.
He does have the ability to point at other thinkers and
The little Barsalou I have read so far has been quite interesting, and I think
there are a lot of good points there, even if it is a rather extreme position.
The issue of how concepts (which is likely a nice suitcase word lumping a lot
of discrete or at least overlapping cognitive functions
J Andrew Rogers writes: Most arguments and disagreements over complexity are
fundamentally about the strict definition of the term, or the complete
absence thereof. The arguments tend to evaporate if everyone is forced to
unambiguously define such terms, but where is the fun in that.
I agree
Richard: I get tripped up on your definition of complexity:
A system contains a certain amount of complexity in it if it
has some regularities in its overall behavior that are governed
by mechanisms that are so tangled that, for all practical purposes,
we must assume that we will never
Mark Waser: Huh? Why doesn't engineering discipline address building complex
devices?
Perhaps I'm wrong about that. Can you give me some examples where engineering
has produced complex devices (in the sense of complex that Richard means)?
Me: Can you give me some examples where engineering
has produced complex devices (in the sense of complex
that Richard means)?
Mark: Computers. Anything that involves aerodynamics.
Richard, is this correct? Are human-engineered airplanes complex in the sense
you mean?
Mark Waser:
I don't know what is going to be more complex than a variable-geometry-wing
aircraft like an F-14 Tomcat. Literally nothing can predict its aerodynamic
behavior. The avionics are purely reactive because its future behavior cannot
be predicted to any certainty even at
Richard Loosemore: it makes no sense to ask "is system X complex?" You can
only ask how much complexity, and what role it plays in the system.
Yes, I apologize for my sloppy language. When I say "is system X complex?"
what I mean is whether the RL-complexity of the system is important in
Ben Goertzel writes:
it might be valuable to have an integration of Player/Stage/Gazebo with
OpenSim
I think this type of project is a good start toward addressing one of the major
critiques of the virtual world approach -- the temptation to (unintentionally)
cheat -- those canned
One more bit of ranting on this topic, to try to clarify the sort of thing I'm
trying to understand.
Some dude is telling my AGI program: There's a piece called a 'knight'. It
moves by going two squares in one direction and then one in a perpendicular
direction. And here's something neat:
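The knight rule as stated is easy to encode directly; here is a minimal sketch (the 0..7 coordinate convention and the function name are my own, not anything from the original message):

```python
# "Two squares in one direction and then one in a perpendicular direction"
# yields eight candidate offsets.
KNIGHT_DELTAS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
                 (1, 2), (1, -2), (-1, 2), (-1, -2)]

def knight_moves(file, rank):
    """Return the on-board squares a knight reaches from (file, rank),
    with both coordinates in 0..7 on a standard 8x8 board."""
    return [(file + df, rank + dr)
            for df, dr in KNIGHT_DELTAS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)))  # 2: a corner knight has only two moves
print(len(knight_moves(3, 3)))  # 8: a central knight has all eight
```

The interesting AGI question is of course not encoding the rule by hand, but how a system could build this representation from the plain-English description.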
Stephen Reed writes:
Hey Texai, let's program
[Texai] I don't know how to program, can you teach me by yourself?
Sure, first thing is that a program consists of statements that each does
something
[Texai] I assume by program you mean a sequence of instructions that a
computer can interpret and
Vladimir Nesov writes: Generating concepts out of thin air is no big deal,
if only a resource-hungry process. You can create a dozen for each episode,
for example.
If I am not certain of the appropriate mechanism and circumstances for
generating one concept, it doesn't help to suggest that a
Richard Loosemore: I do not laugh at your misunderstanding, I laugh at the
general complacency; the attitude that a problem denied is a problem solved.
I laugh at the tragicomic waste of effort.
I'm not sure I have ever seen anybody successfully rephrase your complexity
argument back at
Josh writes: You see, I happen to think that there *is* a consistent, general,
overall theory of the function of feedback throughout the architecture. And I
think that once it's understood and widely applied, a lot of the
architectures (repeat: a *lot* of the architectures) we have floating
Richard Loosemore:
I'll try to tidy this up and put it on the blog tomorrow.
I'd like to pursue the discussion and will do so in that venue after your post.
I do think it is a very interesting issue. Truthfully I'm more interested in
your specific program for how to succeed than this
William Pearson writes: Consider an AI learning chess, it is told in plain
english that...
I think the points you are striving for (assuming I understand what you mean)
are very important and interesting. Even the first simplest steps toward this
clear and (seemingly) simple task baffle me.
Steve Richfield writes:
Hmm, I haven't seen a reference to those core publications. Is there a
semi-official list?
This list is maintained by the Artificial General Intelligence Research
Institute. See www.agiri.org. On that site there are several semi-official
lists -- under
Note that the "Instead of an AGI Textbook" section is hardly fleshed out at all
at this point, but it does link to a more-complete similar effort to be found
here:
http://nars.wang.googlepages.com/wang.AGI-Curriculum.html
---
agi
Archives:
Steve Richfield, writing about J Storrs Hall:
You sound like the sort that, once the thing is sort of
roughed out, likes to polish it up and make it as good as possible.
I don't believe your characterization is accurate. You could start with this
well-done book to check that opinion:
Jim Bromer writes: With God's help, I may have discovered a path toward a
method to achieve a polynomial time solution to Logical Satisfiability
If you want somebody to talk about the solution, you're
more likely to get helpful feedback elsewhere as it is not a
topic that most of us on this
[EMAIL PROTECTED] writes:
But it should be quite clear that such methods could eventually be very handy
for AGI.
I agree with your post 100%, this type of approach is the most interesting
AGI-related stuff to me.
An audiovisual perception layer generates semantic interpretation on the
Stephen Reed writes:
How could a symbolic engine ever reason about the real world *with* access
to such information?
I hope my work eventually demonstrates a solution to your satisfaction.
Me too!
In the meantime there is evidence from robotics, specifically driverless
cars,
Related obliquely to the discussion about pattern discovery algorithms: "What
is a symbol?"
I am not sure that I am using the words in this post in exactly the same way
they are normally used by cognitive scientists; to the extent that causes
confusion, I'm sorry. I'd rather use words in
Mark Waser writes:
True enough, that is one answer: by hand-crafting the symbols and the
mechanics for instantiating them from subsymbolic structures. We of
course hope for better than this but perhaps generalizing these working
systems is a practical approach. Um. That is what is
Ben,
It seems to me that Novamente is widely considered the most promising and
advanced AGI effort around (at least of the ones one can get any detailed
technical information about), so I've been planning to put some significant
effort into understanding it with a view toward deciding whether
Ben Goertzel writes: The PLN book should be out by that date ... I'm currently
putting in some final edits to the manuscript... Also, in April and May
I'll be working on a lot of documentation regarding plans for OpenCog.
Thanks, I look forward to both of these.
Dennis Gorelik writes: Derek, I quoted this article of Richard's in my blog:
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
Cool. Now I'll quote your blogged response:
So, if low level brain design is incredibly complex - how do we copy it? The
answer is:
Richard Loosemore writes: This becomes a problem because when we say of
another person that they meant something by their use of a particular word
(say "cat"), what we actually mean is that that person had a huge amount of
cognitive machinery connected to that word "cat" (reaching all the way
Richard Loosemore writes: Okay, let me try this. Imagine that we got a
bunch of computers [...]
Thanks for taking the time to write that out. I think it's the most
understandable version of your argument that you have written yet. Put it on
the web somewhere and link to it whenever the
Hi Robin. In part it depends on what you mean by fast.
1. Fast - less than 10 years.
I do not believe there are any strong arguments for general-purpose AI being
developed in this timeframe. The argument here is not that it is likely, but
rather that it is *possible*. Some AI researchers,
Bryan Bishop: Looks like they were just simulating eight million neurons with
up to 6.3k synapses each. How's that necessarily a mouse simulation, anyway?
It isn't. Nobody said it was necessarily a mouse simulation. I said it was
a simulation of a mouse-brain-like structure. Unfortunately,
Edward,
For some reason, this list has become one of the most hostile and poisonous
discussion forums around. I admire your determined effort to hold substantive
conversations here, and hope you continue. Many of us have simply given up.
-
This list is sponsored by AGIRI: http://www.agiri.org/email
A large number of individuals on this list are architecting an AGI solution
(or part of one) in their spare time. I think that most of those efforts do
not have meaningful answers to many of the questions, but rather intend to
address AGI questions from a particular perspective. Would such
1. What is the single biggest technical gap between current AI and AGI?
I think hardware is a limitation because it biases our thinking to focus on
simplistic models of intelligence. However, even if we had more computational
power at our disposal we do not yet know what to do with it, and
Tim Freeman writes: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the knowledge
and skills to become a competent programmer, a task that takes a human many
years of directed training and practical experience. 2) It is
Tim Freeman: No value is added by introducing considerations about
self-reference into conversations about the consequences of AI engineering.
Junior geeks do find it impressive, though.
The point of that conversation was to illustrate that if people are worried
about Seed AI exploding, then
Linas Vepstas: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the
knowledge and skills to become a competent programmer, a task that takes a
human many years of directed training and practical experience.
Wrong. This
Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be
a matter of definition. But so far the several people who have gotten back to
me, including yourself, seem to take the position that that is not the type of
recursive self improvement they consider to be RSI. Some
I wrote:
If we do not give arbitrary access to the mind model itself or its
implementation, it seems safer than if we do -- this limits the
extent that RSI is possible: the efficiency of the model implementation
and the capabilities of the model do not change.
An obvious objection to this
Richard Loosemore: a) the most likely sources of AI are corporate or
military labs, and not just US ones. No friendly AI here, but profit-making
and mission-performing AI. Main assumption built into this statement: that
it is possible to build an AI capable of doing anything except dribble
Richard Loosemore writes: You must remember that the complexity is not a
massive part of the system, just a small-but-indispensable part. I think
this sometimes causes confusion: did you think that I meant that the whole
thing would be so opaque that I could not understand *anything* about
Edward W. Porter writes: To Matt Mahoney.
Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied RSI
(which I assume from context is a reference to Recursive Self Improvement) is
necessary for general intelligence.
So could you, or someone, please define exactly
it a lot.
Date: Mon, 1 Oct 2007 11:34:09 -0400
From: [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content
Derek Zahn wrote: Richard Loosemore writes: You must remember
that the complexity is not a massive part of the system, just a
small
I suppose I'd like to see the list management weigh in on whether this type of
talk belongs on this particular list or whether it is more appropriate for the
singularity list.
Assuming it's okay for now, especially if such talk has a technical focus:
One thing that could improve safety is to
Richard Loosemore writes: It is much less opaque. I have argued that this
is the ONLY way that I know of to ensure that AGI is done in a way that
allows safety/friendliness to be guaranteed. I will have more to say about
that tomorrow, when I hope to make an announcement.
Cool. I'm sure
Don Detrich writes:
AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us
Admittedly there are many possible dangers with future AGI technology. We can
think of a million horror stories and in all probability some of the problems
that will
Responding to Edward W. Porter:
Thanks for the excellent message!
I am perhaps too interested in seeing what the best response from the field of
AGI might be to intelligent critics, and probably think of too many
conversations in those terms; I did not mean to attack or criticise your
Ben Goertzel writes: http://www.nvidia.com/page/home.html Anyone know what
are the weaknesses of these GPU's as opposed to ordinary processors? They
are good at linear algebra and number crunching, obviously. Is there some
reason they would be bad at, say, MOSES learning?
These parallel
Moshe Looks writes: This is not quite correct; it really depends on the
complexity of the programs one is evolving and the structure of the fitness
function. For simple cases, it can really rock; see
http://www.cs.ucl.ac.uk/staff/W.Langdon/
That's interesting work, thanks for the link!
Robert Wensman writes:
Has there been any work done previously in statistical, example driven
deduction?
Yes. In this AGI community, Pei Wang's NARS system is exactly that:
http://nars.wang.googlepages.com/
Also, Ben Goertzel (et al.) is building a system called Novamente
Robert Wensman writes:
Databases: 1. Facts: Contains sensory data records, and actuator records.
2. Theory: Contains memeplexes that tries to model the world.
I don't usually think of 'memes' as having a primary purpose of modeling the
world... it seems to me like the key to your whole
9. A particular AGI theory
That is, one that convinces me it's on the right track.
Now that you have run this poll, what did you learn from the responses and how
are you using this information in your effort?
I think probably every AGI-curious person has intuitions about this subject. Here
are mine:
Some people, especially those espousing a modular software-engineering type of
approach seem to think that a perceptual system basically should spit out a
token for "chair" when it sees a chair, and then a
One last bit of rambling in addition to my last post:
When I assert that almost everything important gets discarded while merely
distilling an array of rod and cone firings into a symbol for "chair", it's
fair to ask exactly what that other stuff is. Alas, I believe it is
fundamentally
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation
of a programmable 2-input logic gate that you train using reinforcement
conditioning.
Is it ethical to compile and run this program?
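The program itself is cut off in this excerpt, so here is only a hypothetical sketch of the kind of thing described: a programmable 2-input gate whose four-entry truth table is shaped by reward and penalty signals. The class name, update rule, and learning rate are my own inventions, not Mahoney's code.

```python
import random

random.seed(0)  # reproducible run

class Gate:
    """A programmable 2-input gate: for each input pair it holds the
    probability of outputting 1, nudged by reinforcement signals."""
    def __init__(self):
        self.p = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}

    def fire(self, a, b):
        return 1 if random.random() < self.p[(a, b)] else 0

    def reinforce(self, a, b, out, reward, rate=0.1):
        # Reward nudges toward repeating `out`; penalty toward avoiding it.
        target = out if reward else 1 - out
        self.p[(a, b)] += rate * (target - self.p[(a, b)])

# Condition it toward AND by rewarding correct outputs:
g = Gate()
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = g.fire(a, b)
    g.reinforce(a, b, out, reward=(out == (a and b)))
```

After training, the gate's probabilities converge to the AND truth table. Whether a four-number table updated by punishment "feels pain" is exactly the question the post is needling at.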
Josh writes: http://www.netflixprize.com
Thanks for bringing this up! I had heard of it but forgot about it. While I
read about other people's projects/theories and build a robot for my own
project, this will be a fun way to refresh myself on statistical machine
learning techniques and
YKY writes:
There're several reasons why AGI teams are
fragmented and AGI designers don't want to
join a consortium:
A. believe that one's own AGI design is superior
B. want to ensure that the global outcome of AGI is friendly
C. want to get bigger financial rewards
D. There are
Mark Waser writes:
BTW, with this definition of morality, I would argue that it is a very rare
human that makes moral decisions any appreciable percent of the time
Just a gentle suggestion: If you're planning to unveil a major AGI initiative
next month, focus on that at the moment.