Rooftop8000 writes:
Yes, but software doesn't need to see or walk around because it lives
inside a computer. Why aren't you putting in echo-location or knowing how
to flap wings?
In my opinion, those would be viable things to include in a proto-AGI. They
don't lead as directly to
David Clark writes:
I looked up SEXPR and the following is what I got.
I think he was just using it as shorthand for s-expression.
Looking over the web page you linked to, it seems like your approach is
basically that building an AGI (at least an AGI of the type you are
pursuing) is at its heart
David Clark writes:
Everyone on this list is quite different.
It would be interesting to see what basic interests and views the members of
this list hold. For a few people, published works answer this pretty
clearly but that's not true for most list members.
I'll start.
I'm a
Ben Goertzel writes:
I don't think there are any good, general incremental tests for progress
toward AGI. There are just too many different potentially viable approaches,
with qualitatively different development arcs.
Nevertheless, I wish somebody would try to specify some that are perhaps
Nothing particularly original here, but I think
it's kind of interesting.
Suppose that at some point, basically by accident,
the brains of our ancestors became capable of supporting
the evolution of memes.
Biological evolution started with a LOOONG period of
low complexity creatures, during
Mike Tintner writes:
Let's call it the Neo-Maze Test.
I think this type of test is pretty interesting; the objection,
if any, is whether the capabilities of this robot are really
getting toward what we would like to consider general
intelligence.
For example, moving from the simple maze to
Richard Loosemore writes:
The best we can do is to use the human design as a close inspiration -- we
do not have to make an exact copy, we just need to get close enough to
build something in the same family of systems, that's all -- and set up
progress criteria based on how well we explain
Richard Loosemore writes:
I am talking about distilling the essential facts uncovered by cognitive
science into a unified formalism.
Just imagine all of your favorite models and theories in Cog Sci,
integrated in such a way that they become an actual system specification
instead of a
Richard Loosemore provides:
Interesting examples of what he is talking about.
Thanks, that makes what you are proposing much clearer. I'm not sure how
the essential facts are selected from the universe of facts, but making that
distinction is probably part of the process. I'm not very
Mike Tintner writes:
It goes ALL THE WAY. Language is backed by SENSORY images - the whole
range.
ALL your assumptions about how language can't be cashed out by images and
graphics will be similarly illiterate - or, literally, UNIMAGINATIVE.
I don't doubt that the visual and other sensory
Mike Tintner writes:
And... by now you should get the idea.
And the all-important thing here is that if you want to TEST or question
the above sentence, the only way to do it successfully is to go back and
look at the reality. If you wanted to argue, well look at China, they're
rocketing
Bob Mottram writes:
When you're reading a book or an email I think what you're doing is
tying your internal simulation processes to the stream of words
Then it would be crucial to understand these simulation processes.
For some very visual things I think I can follow what I think you
are
To elaborate a bit:
It seems likely to me that our minds work with the
mechanisms of perception when appropriate -- that
is, when the concepts are not far from sensory
modalities. This type of concept is basically all
that animals have and is probably most of what
we have.
Somehow, though, we
Bob Mottram writes:
Some things can be not so long as others.
...
Thanks for taking the time for such in-depth descriptions, but I am still
not clear what you are getting at. Much of what you write is a
context in which the meaning of a term might have been learned,
sometimes with multiple
Mark Waser writes:
Intelligence is only as good as your model of the world and what it allows
you to do (which is pretty much a paraphrasing of Legg's definition as far
as I'm concerned).
Since Legg's definition is quite explicitly careful not to say anything
at all about the internal
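For reference (my gloss of Legg and Hutter, not anything Mark wrote): the
definition scores an agent purely by behavior, roughly

  Upsilon(pi) = sum over environments mu in E of 2^(-K(mu)) * V(pi, mu)

where pi is the agent, E is the set of computable reward-bounded
environments, K(mu) is the Kolmogorov complexity of mu, and V(pi, mu) is the
expected total reward pi earns in mu. Nothing in it refers to the agent's
internals, which is the point at issue.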
Ben Goertzel writes:
[Ben's research uses] a virtual robot in a sim world rather than a physical
robot in the real world.
Does your software get as input a rendered (but still visual) view of the
sim world, or does it have access to higher-level information about the
simulation? If the
Ben Goertzel writes:
... John Weng's SAIL project...
http://www.cse.msu.edu/~weng/
Thanks for that link, what an interesting-looking project! (I haven't gone
into it in depth yet but from the overview material it certainly seems to
qualify as AGI research). I'm thinking of playing around
Mike Tintner writes:
I don't know though - having still only glanced at [Hawkins's] stuff -
whether he has yet made the transition from being able to recognise a dog
to being able to recognize an animal. Anyone know about this?
I'm fairly certain the answer is no -- in fact as far as I know
Mike Tintner writes:
Wow. Really? He can't recognize a basic dog / cat etc? Are you sure?
Depends on what you mean by basic -- There is a demonstration that
classifies extremely simplified line drawings, which include dog and
cat.
Here's a document about it:
Mike Tintner writes:
Yes. Thanks. I had seen that. (And I still have to fully understand his
system). But my question remains: where did you get your information about
his system's FAILURES to recognize basic types?
?
I never said anything about failures. You asked
whether he has yet made
Ben Goertzel writes:
Well, it's a commercial project so I can't really talk about what the
capabilities of the version 1.0 virtual pets will be.
I did spend a few evenings looking around Second Life. From
that experience, I think that virtual prostitutes would be
a more profitable product :)
On a less joking note, I think your ideas about applying your
cognitive engine to NPCs in RPG type games (online or otherwise)
could work out really well. The AI behind the game entities
that are supposedly people is depressingly stupid, and games
are a bazillion-dollar business.
I hope your
J Storrs Hall, PhD. writes:
As long as the trumpets are blaring, Beyond AI is coming out this month,
with
the coolest cover I've seen on any non-fiction book (he says modestly):
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117
Cool! I just pre-ordered my copy!
Look
J. Storrs Hall, PhD. writes:
I'm intending to do low-level vision on (one) 8800 and everything else on my
(dual) Clovertowns.
Do you have any particular architectures / algorithms you're working on?
Your approach and mine sound like there could be valuable shared effort...
First I'm going
J. Storrs Hall, PhD. writes:
NVIDIA claims half a teraflop for the 8800 GTX. You need an embarrassingly
parallel problem, tho.
That claim is slightly bogus (I think they are figuring in some
graphics-specific feature which would rarely if ever be used by general
purpose algorithms [texture
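For what it's worth, here is the arithmetic behind the half-teraflop figure
as I reconstruct it (my numbers, not from the thread): the 8800 GTX has 128
stream processors at a 1.35 GHz shader clock, and NVIDIA counts 3 flops per
clock (a multiply-add co-issued with an extra multiply), giving
128 * 1.35e9 * 3 = ~518 GFLOPS. General-purpose code usually gets only the
multiply-add, i.e. 128 * 1.35e9 * 2 = ~346 GFLOPS at best.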
David Clark writes:
I can predict with high accuracy what I will think on almost any topic.
People that can't either don't know much about the principles they use to
think or aren't very rational. I don't use emotion or the current room
temperature to make decisions. (No implication
J. Storrs Hall writes:
Tommy, the scientific experiment and engineering project, is almost all
about concept formation.
Great project! While I'm not quite sure about the 'meaning in the concept of
price-theoretical market equilibria' thing, I really like your idea and it's
similar in broad
Bob Mottram writes: In order to differentiate this from the rest of the
robotics crowd you need to avoid building a specialised pinball playing robot.
I can't speak for JoSH, but I got the impression that playing pinball or
anything similar was not the object; the object was to provide real
[EMAIL PROTECTED] writes:
Help from anyone on this list with experience with the GNU toolchain on
ARM-based microcontrollers will be gratefully accepted :-)
I have a lot of such experience and would be happy to help out with whatever
you need. Post more details here if you think they
Matt Mahoney writes:
(sigh)
http://en.wikipedia.org/wiki/Scruffies
It would be nice to have a universal definition of general intelligence, but I
don't think we even share enough common intuition about what is intelligent or
what is general.
Instead what we seem to have is, for example, a definition based on uncertain
reasoning from somebody building an
Pei,
As part of my ongoing AGI education, I am beginning to study NARS in some
detail. As has been discussed recently here, you define intelligence as:
Intelligence is the capability of an information system to adapt to its
environment while operating with insufficient knowledge and resources.
Pei Wang writes:
Thanks for the interest. I'll do my best to help, though since I'm on
vacation in China, I may not be able to process my emails as usual.
Thank you for your response. I'm planning over the course of the rest of the
year to look in-depth at all of the AGI projects that
The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.
YKY writes:
I guess many are not so keen to join my project because they think open
source makes it very hard to protect their ideas.
Here's why I think nobody is jumping on your project:
1) Those with ongoing projects likely see the costs (in terms of lost
proprietary interest and future
I got my copy in the mail last night, just flipped through it so far but it
looks pretty cool -- though I admit I'm probably more curious about your AGI
design ideas than the main topic!
If anybody checks out the stuff on KurzweilAI.net, I advise you to be careful
about the mind-x forum. I
Mark Waser writes:
The project will be incorporated. The intent of the corporation is to 1)
protect the AGI and 2) to reward those who created it commensurate with their
contributions.
Interesting setup. I fear that this and YKY's project will have
difficulty attracting contributors,
I stayed up late last night reading through J. Storrs Hall, PhD's new book
Beyond AI. It's a pleasant and mostly easy read. Partly this is because it is
written in a clear conversational style that goes down easy, and partly because
it is quite nontechnical. I found myself agreeing with
Lukasz Stafiniak writes:
What about: The ability to create information-based objects generating
income.
Sure. General intelligence would then refer to the range of object types it
can create. 'Information-based' could be omitted, but it saves argument about
whether a chair factory should be
Mark Waser writes:
P.S. You missed the time where Eliezer said at Ben's
AGI conference that he would sneak out the door before
warning others that the room was on fire :-)
You people making public progress toward AGI are very brave indeed! I wonder
if a time will come when the
Mark Waser writes:
BTW, with this definition of morality, I would argue that it is a very rare
human that makes moral decisions any appreciable percent of the time
Just a gentle suggestion: If you're planning to unveil a major AGI initiative
next month, focus on that at the moment.
Mark Waser writes:
I think that morality (aka Friendliness) is directly on-topic for *any* AGI
initiative; however, it's actually even more apropos for the approach that
I'm taking.
A very important part of what I'm proposing is attempting to deal with the
fact that no two humans agree
YKY writes:
There're several reasons why AGI teams are
fragmented and AGI designers don't want to
join a consortium:
A. believe that one's own AGI design is superior
B. want to ensure that the global outcome of AGI is friendly
C. want to get bigger financial rewards
D. There are
Josh writes: http://www.netflixprize.com
Thanks for bringing this up! I had heard of it but forgot about it. While I
read about other people's projects/theories and build a robot for my own
project, this will be a fun way to refresh myself on statistical machine
learning techniques and
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation
of a programmable 2-input logic gate that you train using reinforcement
conditioning.
Is it ethical to compile and run this program?
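The program itself didn't survive into this excerpt. A minimal sketch of the
kind of thing described (my Python, assuming a gate that keeps a probability
of firing per input pair and is conditioned by reward and punishment):

import random

class LogicGate:
    # A programmable 2-input gate: for each input pair, keep the
    # probability of outputting 1, adjusted by reinforcement.
    def __init__(self):
        self.p_one = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}

    def output(self, a, b):
        self.last_in = (a, b)
        self.last_out = 1 if random.random() < self.p_one[(a, b)] else 0
        return self.last_out

    def reinforce(self, reward, rate=0.1):
        # reward > 0 makes the last output more likely to recur;
        # reward < 0 (the "pain") makes it less likely.
        target = self.last_out if reward > 0 else 1 - self.last_out
        p = self.p_one[self.last_in]
        self.p_one[self.last_in] = p + rate * (target - p)

# Condition the gate to behave like AND by punishing wrong answers.
gate = LogicGate()
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    gate.reinforce(1 if gate.output(a, b) == (a and b) else -1)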
I think probably every AGI-curious person has intuitions about this subject. Here
are mine:
Some people, especially those espousing a modular software-engineering type of
approach seem to think that a perceptual system basically should spit out a
token for 'chair' when it sees a chair, and then a
One last bit of rambling in addition to my last post:
When I assert that almost everything important gets discarded while merely
distilling an array of rod and cone firings into a symbol for 'chair', it's
fair to ask exactly what that other stuff is. Alas, I believe it is
fundamentally
9. a particular AGI theory
That is, one that convinces me it's on the right track.
Now that you have run this poll, what did you learn from the responses and how
are you using this information in your effort?
Robert Wensman writes:
Databases: 1. Facts: Contains sensory data records and actuator records.
2. Theory: Contains memeplexes that try to model the world.
I don't usually think of 'memes' as having a primary purpose of modeling the
world... it seems to me like the key to your whole
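To check my reading of the proposal, here is the two-store layout as I
understand it (a sketch; the names and types are mine, not Robert's):

from dataclasses import dataclass, field

@dataclass
class FactsDB:
    # Raw history: what was sensed and what was done.
    sensory_records: list = field(default_factory=list)
    actuator_records: list = field(default_factory=list)

@dataclass
class TheoryDB:
    # Candidate world models ('memeplexes'), presumably scored by how
    # well they predict the contents of FactsDB.
    models: list = field(default_factory=list)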
Robert Wensman writes:
Has there been any work done previously in statistical, example driven
deduction?
Yes. In this AGI community, Pei Wang's NARS system is exactly that:
http://nars.wang.googlepages.com/
Also, Ben Goertzel (et al.) is building a system called Novamente
Ben Goertzel writes: http://www.nvidia.com/page/home.html Anyone know what
are the weaknesses of these GPUs as opposed to ordinary processors? They
are good at linear algebra and number crunching, obviously. Is there some
reason they would be bad at, say, MOSES learning?
These parallel
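A rough illustration of the usual distinction (mine, not from the thread):
GPUs want every element of a big array to follow the same instruction
stream, so uniform arithmetic flies while data-dependent branching and
irregular memory access are where they lose to ordinary processors.

import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)

# GPU-friendly: one uniform arithmetic operation over the whole array.
y = 0.5 * x + 2.0

# GPU-unfriendly: per-element, data-dependent control flow, where
# neighboring elements diverge onto different code paths.
out = np.empty_like(x)
for i, e in enumerate(x):
    out[i] = e * 2.0 if e > 0.5 else e * e

Whether MOSES learning can be cast in the first form is exactly the
question.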
Moshe Looks writes: This is not quite correct; it really depends on the
complexity of the programs one is evolving and the structure of the fitness
function. For simple cases, it can really rock; see
http://www.cs.ucl.ac.uk/staff/W.Langdon/
That's interesting work, thanks for the link!
Responding to Edward W. Porter:
Thanks for the excellent message!
I am perhaps too interested in seeing what the best response from the field of
AGI might be to intelligent critics, and probably think of too many
conversations in those terms; I did not mean to attack or criticise your
Don Detrich writes:
AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us
Admittedly there are many possible dangers with future AGI technology. We can
think of a million horror stories and in all probability some of the problems
that will
I suppose I'd like to see the list management weigh in on whether this type of
talk belongs on this particular list or whether it is more appropriate for the
singularity list.
Assuming it's okay for now, especially if such talk has a technical focus:
One thing that could improve safety is to
Richard Loosemore writes: It is much less opaque. I have argued that this
is the ONLY way that I know of to ensure that AGI is done in a way that
allows safety/friendliness to be guaranteed. I will have more to say about
that tomorrow, when I hope to make an announcement.
Cool. I'm sure
Richard Loosemore writes: You must remember that the complexity is not a
massive part of the system, just a small-but-indispensable part. I think
this sometimes causes confusion: did you think that I meant that the whole
thing would be so opaque that I could not understand *anything* about
Edward W. Porter writes: To Matt Mahoney.
Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied RSI
(which I assume from context is a reference to Recursive Self Improvement) is
necessary for general intelligence.
So could you, or someone, please define exactly
Richard Loosemore: a) the most likely sources of AI are corporate or
military labs, and not just US ones. No friendly AI here, but profit-making
and mission-performing AI. Main assumption built into this statement: that
it is possible to build an AI capable of doing anything except dribble
Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be
a matter of definition. But so far the several people who have gotten back to
me, including yourself, seem to take the position that that is not the type of
recursive self improvement they consider to be RSI. Some
I wrote:
If we do not give arbitrary access to the mind model itself or its
implementation, it seems safer than if we do -- this limits the
extent that RSI is possible: the efficiency of the model implementation
and the capabilities of the model do not change.
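A toy sketch of the separation I mean (my code, purely illustrative): the
rest of the system can read and update the model's contents through a fixed
interface, but the model's implementation is not an object it can rewrite.

class MindModel:
    def __init__(self):
        self._beliefs = {}  # proposition -> strength

    def observe(self, proposition, strength):
        # Update knowledge only through the fixed interface.
        self._beliefs[proposition] = strength

    def query(self, proposition):
        return self._beliefs.get(proposition, 0.0)

# Language-level privacy (as here) is only suggestive; a real deployment
# would have to enforce the boundary at the process or hardware level.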
An obvious objection to this
Tim Freeman writes: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the knowledge
and skills to become a competent programmer, a task that takes a human many
years of directed training and practical experience. 2) It is
Tim Freeman: No value is added by introducing considerations about
self-reference into conversations about the consequences of AI engineering.
Junior geeks do find it impressive, though.
The point of that conversation was to illustrate that if people are worried
about Seed AI exploding, then
Linas Vepstas: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the
knowledge and skills to become a competent programmer, a task that takes a
human many years of directed training and practical experience.
Wrong. This
1. What is the single biggest technical gap between current AI and AGI?
I think hardware is a limitation because it biases our thinking to focus on
simplistic models of intelligence. However, even if we had more computational
power at our disposal we do not yet know what to do with it, and
A large number of individuals on this list are architecting an AGI solution
(or part of one) in their spare time. I think that most of those efforts do
not have meaningful answers to many of the questions, but rather intend to
address AGI questions from a particular perspective. Would such
Edward,
For some reason, this list has become one of the most hostile and poisonous
discussion forums around. I admire your determined effort to hold substantive
conversations here, and hope you continue. Many of us have simply given up.
Hi Robin. In part it depends on what you mean by fast.
1. Fast - less than 10 years.
I do not believe there are any strong arguments for general-purpose AI being
developed in this timeframe. The argument here is not that it is likely, but
rather that it is *possible*. Some AI researchers,
Bryan Bishop: Looks like they were just simulating eight million neurons with
up to 6.3k synapses each. How's that necessarily a mouse simulation, anyway?
It isn't. Nobody said it was necessarily a mouse simulation. I said it was
a simulation of a mouse-brain-like structure. Unfortunately,
Richard Loosemore writes: Okay, let me try this. Imagine that we got a
bunch of computers [...]
Thanks for taking the time to write that out. I think it's the most
understandable version of your argument that you have written yet. Put it on
the web somewhere and link to it whenever the
Dennis Gorelik writes: Derek, I quoted this Richard's article in my blog:
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
Cool. Now I'll quote your blogged response:
So, if low level brain design is incredibly complex - how do we copy it? The
answer is:
Richard Loosemore writes: This becomes a problem because when we say of
another person that they meant something by their use of a particular word
(say 'cat'), what we actually mean is that that person had a huge amount of
cognitive machinery connected to that word 'cat' (reaching all the way
Ben,
It seems to me that Novamente is widely considered the most promising and
advanced AGI effort around (at least of the ones one can get any detailed
technical information about), so I've been planning to put some significant
effort into understanding it with a view toward deciding whether
Ben Goertzel writes: The PLN book should be out by that date ... I'm currently
putting in some final edits to the manuscript... Also, in April and May
I'll be working on a lot of documentation regarding plans for OpenCog.
Thanks, I look forward to both of these.
[EMAIL PROTECTED] writes:
But it should be quite clear that such methods could eventually be very handy
for AGI.
I agree with your post 100%, this type of approach is the most interesting
AGI-related stuff to me.
An audiovisual perception layer generates semantic interpretation on the
Stephen Reed writes:
How could a symbolic engine ever reason about the real world *with* access
to such information?
I hope my work eventually demonstrates a solution to your satisfaction.
Me too!
In the meantime there is evidence from robotics, specifically driverless
cars,
Related obliquely to the discussion about pattern discovery algorithms:
What is a symbol?
I am not sure that I am using the words in this post in exactly the same way
they are normally used by cognitive scientists; to the extent that causes
confusion, I'm sorry. I'd rather use words in
Mark Waser writes:
True enough, that is one answer: by hand-crafting the symbols and the
mechanics for instantiating them from subsymbolic structures. We of
course hope for better than this but perhaps generalizing these working
systems is a practical approach. Um. That is what is
Jim Bromer writes: With God's help, I may have discovered a path toward a
method to achieve a polynomial time solution to Logical Satisfiability
If you want somebody to talk about the solution, you're
more likely to get helpful feedback elsewhere as it is not a
topic that most of us on this
Steve Richfield, writing about J Storrs Hall:
You sound like the sort that, once the thing is sort of roughed out,
likes to polish it up and make it as good as possible.
I don't believe your characterization is accurate. You could start with this
well-done book to check that opinion:
Steve Richfield writes:
Hmm, I haven't seen a reference to those core publications. Is there a
semi-official list?
This list is maintained by the Artificial General Intelligence Research
Institute. See www.agiri.org . On that site there are several semi-official
lists -- under
Note that the 'Instead of an AGI Textbook' section is hardly fleshed out at all
at this point, but it does link to a more-complete similar effort to be found
here:
http://nars.wang.googlepages.com/wang.AGI-Curriculum.html
William Pearson writes: Consider an AI learning chess; it is told in plain
English that...
I think the points you are striving for (assuming I understand what you mean)
are very important and interesting. Even the first simplest steps toward this
clear and (seemingly) simple task baffle me.
Ben Goertzel writes:
it might be valuable to have an integration of Player/Stage/Gazebo with
OpenSim
I think this type of project is a good start toward addressing one of the major
critiques of the virtual world approach -- the temptation to (unintentionally)
cheat -- those canned
One more bit of ranting on this topic, to try to clarify the sort of thing I'm
trying to understand.
Some dude is telling my AGI program: There's a piece called a 'knight'. It
moves by going two squares in one direction and then one in a perpendicular
direction. And here's something neat:
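For concreteness, the rule the dude is stating, as code (my sketch; the AGI,
of course, gets only the English):

def knight_moves(file, rank):
    # Two squares in one direction, then one in a perpendicular
    # direction, staying on the 8x8 board (0-indexed coordinates).
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(file + df, rank + dr) for df, dr in deltas
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(knight_moves(0, 0))  # [(1, 2), (2, 1)]

The baffling part is getting from the English sentence to anything like
this.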
Stephen Reed writes:
Hey Texai, let's program
[Texai] I don't know how to program, can you teach me by yourself?
Sure, first thing is that a program consists of statements that each does
something
[Texai] I assume by program you mean a sequence of instructions that a
computer can interpret and
Vladimir Nesov writes: Generating concepts out of thin air is no big deal,
if only a resource-hungry process. You can create a dozen for each episode,
for example.
If I am not certain of the appropriate mechanism and circumstances for
generating one concept, it doesn't help to suggest that a
Richard Loosemore: I do not laugh at your misunderstanding, I laugh at the
general complacency; the attitude that a problem denied is a problem solved.
I laugh at the tragicomic waste of effort.
I'm not sure I have ever seen anybody successfully rephrase your complexity
argument back at
Josh writes: You see, I happen to think that there *is* a consistent, general,
overall theory of the function of feedback throughout the architecture. And I
think that once it's understood and widely applied, a lot of the
architectures (repeat: a *lot* of the architectures) we have floating
Richard Loosemore:
I'll try to tidy this up and put it on the blog tomorrow.
I'd like to pursue the discussion and will do so in that venue after your post.
I do think it is a very interesting issue. Truthfully I'm more interested in
your specific program for how to succeed than this
J Andrew Rogers writes: Most arguments and disagreements over complexity are
fundamentally about the strict definition of the term, or the complete
absence thereof. The arguments tend to evaporate if everyone is forced to
unambiguously define such terms, but where is the fun in that?
I agree
Richard: I get tripped up on your definition of complexity:
A system contains a certain amount of complexity in it if it
has some regularities in its overall behavior that are governed
by mechanisms that are so tangled that, for all practical purposes,
we must assume that we will never
Mark Waser: Huh? Why doesn't engineering discipline address building complex
devices?
Perhaps I'm wrong about that. Can you give me some examples where engineering
has produced complex devices (in the sense of complex that Richard means)?
Me: Can you give me some examples where engineering
has produced complex devices (in the sense of complex
that Richard means)?
Mark: Computers. Anything that involves aerodynamics.
Richard, is this correct? Are human-engineered airplanes complex in the sense
you mean?
Mark Waser:
I don't know what is going to be more complex than a variable-geometry-wing
aircraft like an F-14 Tomcat. Literally nothing can predict its aerodynamic
behavior.
The avionics are purely reactive because its future behavior cannot be
predicted
to any certainty even at
Richard Loosemore: it makes no sense to ask 'is system X complex?'. You can
only ask how much complexity, and what role it plays in the system.
Yes, I apologize for my sloppy language. When I say 'is system X complex?'
what I mean is whether the RL-complexity of the system is important in
The little Barsalou I have read so far has been quite interesting, and I think
there are a lot of good points there, even if it is a rather extreme position.
The issue of how concepts (which is likely a nice suitcase word lumping a lot
of discrete or at least overlapping cognitive functions
I assume you are referring to Mike Tintner.
As I described a while ago, I *plonk*ed him myself a long time ago; most
mail programs have the ability to do that, and it's a good idea to figure out
how to do it with your own email program.
He does have the ability to point at other thinkers and
Thanks, what an interesting project. Purely on the mechanical side, it shows
how far away we are from truly flexible house-friendly robust mobile robotic
devices.
I'm a big fan of the robotic approach myself. I think it is quite likely that
dealing with the messy flood of dirty data coming