Is anybody working on building ethical capacity into AGI from the
ground up?
As I mentioned to Ben yesterday, AGIs without ethics could end up
being the next decade's e-viruses (on steroids).
Cheers, Philip
My thoughts on this are at
www.goertzel.org/dynapsyc/2002/AIMorality.htm
One trouble with this endeavor is that AGI is a fuzzy set...
However, I'd be quite interested to see this list, even so.
In fact, I think it'd be more valuable to simply see a generic list of all
AGI projects, be they commercial or non.
If anyone wants to create such a list, I'll be happy to
Hi,
Inspired by a recent post, here is my attempt at a list of serious AGI
projects underway on the planet at this time.
If anyone knows of anything that should be added to this list, please let me
know.
· Novamente ...
· Pei Wang's NARS system
· Peter Voss's A2I2
David Noziglia wrote:
It is a common belief that game theory has shown that it is
advantageous to be selfish and nasty. I assume that the members of
this group know that is wrong: game theory has in fact shown that in a
situation of repeated interaction, it is more advantageous from
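To make the repeated-interaction point concrete, here is a minimal
iterated prisoner's dilemma sketch in Python, using the standard
textbook payoff matrix (the numbers are the usual ones, not from this
thread): a reciprocating strategy such as tit-for-tat earns far more
against itself over many rounds than mutual defection does.

# Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Standard payoffs: both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, the exploited cooperator -> 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # each strategy sees the opponent's history
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat))      # (600, 600): reciprocity pays
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): exploitation wins one round only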
I think the key fact is that most of these projects are currently
relatively inactive --- plenty of passion out there, just not a
lot of resources.
The last I heard, both the HAL project and the CAM-Brain project
were pretty much at a standstill due to lack of funding?
That is correct.
I
James Rogers wrote:
You would quite obviously be correct about the tractability if someone
actually tried to brute force the entire algorithm space in L. The
knowability factor means that we don't always (hardly ever?) get the
best algorithm, but it learns and adapts very fast and this
Paul Werbos,
I agree fully with your comments on the nature of "intelligence".
Clearly, "intelligence" is a natural language concept -- it's fuzzy,
ambiguous, and it was created by humans for a certain collection of
purposes.
It was not created for dealing with other species loosely as
Paul P wrote:
***
That "I" have not demonstrated precise
definitions to the phenomenon of life or the phenomenon of intelligence is not a
proper argument that you are right about the possibility of computer
life.
***
Of course it isn't an argument that I'm right.
If by "prove" you're
I wrote
***
There is a difference between Novamente and abstractions about
Novamente. There is a difference between my brain and abstractions
about my brain. A computer running a Novamente software system is not
an abstraction, nor is my brain.
Both a computer running a Novamente system,
Shane Legg wrote:
Who's to say what intelligence *really* is? My approach is simply
to try to define something that captures with precision aspects of
the fuzzy concept of intelligence, name it something else, and then
get on with the job. If nobody/everybody thinks that what I have
Here is a notice for an interesting AGI-related conference in a kick-ass
location ;-)
Though they don't use the term AGI, it seems they are specifically looking
for people to come present papers on AGI-related topics
The call for papers uses language such as autonomous behavior and cognitive
BG: Optimizing the optimizer is what we've called supercompiling the
supercompiler; it's a natural idea, but we're not there yet.
I didn't mean supercompiling the supercompiler but rather evolving
the supercompiler through known techniques such as GA. I have no idea
how feasible
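For what "evolving through known techniques such as GA" would mean
mechanically, here is a toy genetic algorithm in Python. The bit-string
genome and the stand-in fitness function are illustration-only
assumptions; nothing in the thread specifies the supercompiler's real
parameter space or benchmark.

import random

# Toy GA: evolve a bit-string genome toward a stand-in fitness
# function. A real application would replace `fitness` with a
# benchmark of the compiler configuration the genome encodes.
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60

def fitness(genome):
    return sum(genome)  # placeholder: count of 1-bits

def mutate(genome, rate=0.02):
    return [b ^ (random.random() < rate) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print(max(fitness(g) for g in pop))  # approaches GENOME_LEN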
Charles Hixson wrote (in response to me):
-- create a flexible knowledge representation (KR) useful for
representing all forms of knowledge (declarative, procedural,
perceptual, abstract, linguistic, explicit, implicit, etc. etc.)
This probably won't work. Thinking of the brain as a
Arthur Murray wrote:
If Ben Goertzel and the rest of the Novamente team build up
an AI that mathematically comprehends mountains of data,
they may miss the AI boat by not creating persistent concepts
that accrete and auto-prune over time as the basis of NLP.
No, even before the Novamente
***
Perhaps most subscribers are on top of/well beyond this material
already but you might be interested in two new books:
A New Kind of Science, Stephen Wolfram, 2002
ISBN: 1-57955-008-8
***
My reaction to this book is summarized in the review I wrote right
after it came out.
it's on my
Hi,
We have a bunch of languages that are tailored for particular
purposes.
To feed Novamente data right now, we use either nmsh (Novamente
shell) scripts or XML (using special tags that map into Novamente
nodes and links). Psynese could be represented in either of these
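Purely as illustration of the XML option, a short Python sketch that
emits node/link markup of the general kind described. The tag names
(ConceptNode, InheritanceLink, Target) and attributes are invented
here for the example; they are not Novamente's actual schema.

import xml.etree.ElementTree as ET

# Hypothetical illustration only: these element and attribute names
# are invented, not Novamente's real nmsh/XML vocabulary.
root = ET.Element('novamente-input')
ET.SubElement(root, 'ConceptNode', name='cat')
ET.SubElement(root, 'ConceptNode', name='animal')
link = ET.SubElement(root, 'InheritanceLink', strength='0.95')
ET.SubElement(link, 'Target', ref='cat')     # "cat inherits from animal"
ET.SubElement(link, 'Target', ref='animal')

print(ET.tostring(root, encoding='unicode'))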
Alan Grimes wrote:
In 2003, our data entry activities will be emphasized as a result of
our participation in DARPA's Total Information Awareness program, for
which we will construct a Terrorism Knowledge Base containing all the
open-source information on terrorist individuals, organizations, and events
Stephen Read wrote:
As Cycorp is the best funded company among those organizations with AGI as
their primary goal, I would state that for us enrichment is not the
motive.
Steve, I accept this as an honest statement of your personal motivations.
However, I'm not sure that Cycorp's investors
Alan
The next question is: What's your corresponding estimate of processing
power?
To emulate the massively parallel information update rate of the brain on
N bits of memory, how many commodity PC processors are required per GB of
RAM?
Ben G
Ben Goertzel wrote:
A short, interesting
Subject: Re: [agi] How wrong are these numbers?
Ben Goertzel wrote:
The next question is: What's your corresponding estimate of processing
power?
Thanks for the prompt.
Let's use the number 2^30 for the size of the memory, which will
require 25 operations for each 32-bit word. But let's not
underestimate the human mind, small m, in the meantime. No one has
come even close to matching it yet.
Sorry for the length and for babbling...
Kevin
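The back-of-envelope arithmetic above can be made explicit. A sketch
that takes the thread's figures (2^30 bytes of memory, 25 operations
per 32-bit word) and adds two illustrative assumptions of my own that
are not from the thread: a 100 Hz whole-memory update rate and 10^9
sustained operations per second per commodity CPU.

# Processors needed per GB of RAM to sweep the whole memory at a
# brain-like update rate. The 25-ops-per-word figure is from the
# thread; the 100 Hz refresh rate and 1e9 ops/sec per CPU are
# assumptions for illustration only.
words_per_gb = 2**30 // 4   # 2^28 32-bit words in 2^30 bytes
ops_per_word = 25           # from the thread
update_hz = 100             # assumed whole-memory refresh rate
ops_per_cpu = 1e9           # assumed sustained ops/sec per CPU

ops_per_second = words_per_gb * ops_per_word * update_hz
print(ops_per_second / ops_per_cpu)  # ~671 processors per GB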
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 03, 2002 6:59 PM
Kevin wrote:
I will go so far as to say that any computer system we develop, even one
that realizes all the promises of the singularity, can only match the
capacity of the human Mind. Why? Because the universe is the Mind
itself, and the computational capacity of the universe is rather
Kevin wrote:
I think Ben is closer than anyone to having a true mapping of the
brain and its capabilities. As to whether it ultimately develops the
emergent qualities we speak of... time will tell... even
if it falls short of singularity-type hype, I believe it can provide
tremendous benefits
is not specifically devoted to my own approach
to AGI.
Some other approaches to AGI are described at
www.cyc.com
www.adaptiveai.com/
www.cogsci.indiana.edu/farg/peiwang/papers.html
http://www.singinst.org/CFAI/
[This is a small, unsystematic list with very many
important omissions]
-- Ben
I am impressed that they have actually taken the step of integrating
their logic-based memory, inference and learning framework with a
real system with sensors and actuators, however. Ultimately, this
sort of work may reveal to them the weakness of their cognitive
mechanisms and
that DB, and maintain it
carefully!!! One of these days I'll be wanting to talk to you about an
arrangement for sharing it ;)
-- Ben Goertzel
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Kevin Copple
Sent: Sunday, December 08, 2002 4:11 AM
Gary Miller wrote:
I also agree that the AGI approach of modeling and creating a self
learning system is a valid bottom up approach to AGI. But it is much
harder for me with my limited mathematical and conceptual knowledge of
the research to grasp how and when these systems will be able
of radical anti-AI people emerge with hostile
intent. Another good reason to not be so visible!!
Kevin
- Original Message -
From:
Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Monday, December 09, 2002 11:26 AM
Subject: RE: [agi] AI on TV
)? Perhaps we can co-ordinate our efforts somehow.
Peter
http://adaptiveai.com/
-Original Message-
Behalf Of Ben Goertzel
... [Although, in fact, Tony Lofthouse is coding up a simple 2D
training-world right now, just to test
some of the current Novamente cognitive functions
True. The more fundamental point is that symbols representing entities and
concepts need to be grounded with (scalar) attributes of some sort.
How this is *implemented* is a practical matter. One important
consideration for AGI is that data is easily retrievable by vector distance
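A minimal sketch of what "retrievable by vector distance" means in
practice: ground each symbol in a small numeric attribute vector and
fetch nearest neighbors by Euclidean distance. The symbols and vector
values below are invented for illustration.

import math

# Toy grounding: each symbol carries a small attribute vector;
# retrieval is nearest-neighbor search by Euclidean distance.
symbols = {
    'sparrow': (0.9, 0.8, 0.1),  # e.g. (animate, flies, large)
    'eagle':   (0.9, 0.9, 0.6),
    'truck':   (0.0, 0.0, 0.9),
}

def nearest(query, k=2):
    return sorted(symbols, key=lambda s: math.dist(query, symbols[s]))[:k]

print(nearest((0.8, 0.7, 0.2)))  # ['sparrow', 'eagle']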
Mike,
Actually, I'm not sure what the difference is between your proposal and
the "game" proposal.
In games like SimAnt or Creatures, the game-play basically just
consists of exploring a simulated environment full of other
organisms.
I think your suggestion is basically in the same
Hi folks,
Sorry about the spam ;-(
I just logged onto listbox.com and changed the AGI list policy from
* anyone may post
to
* subscribers only may post
This should eliminate the spam problem. It may be annoying for people
who have multiple e-mail addresses, but I don't have a
Shane,
I agreed with the wording in your earlier post more ;)
It is true that learning Esperanto would be easier for an AI than learning
English or Italian.
However, I think that if you had an AI capable of mastering the
syntax-semantics-pragmatics interface [the really hard part of language,
Gary Miller wrote:
***
I guess I'm still having trouble with the concept of grounding. If I
teach/encode a bot with 99% of the knowledge about hydrogen using
facts and information available in books and on the web, it is now an
idiot savant in that it knows all about hydrogen and
nothing about
This message from James Rogers seems to have gone to SL4 instead of AGI ...
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of James
Rogers
Sent: Sunday, December 29, 2002 8:39 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Early Apps.
On 12/29/02 4:22 PM,
Gary Miller wrote:
I agree that as humans we bring a lot of general knowledge with us when
we learn a new domain. That is why I started off with the general
conversational domain and am now branching into science, philosophy,
mathematics and history. And of course the AI cannot make all
Kevin Copple wrote:
Thinking in humans, much like genetic evolution, seems to involve
predominantly trial and error. Even the logic we like to use is more
often than not faulty, but can lead us to try something different. An
example of popular logic that is invariably faulty is reasoning
Kevin Copple wrote:
I do not want to say that random trial and error is the ultimate form of
intelligent thought. Far from it. But given what nature and
humankind have
achieved with it to date, and that we may not even recognize the extent to
which it is involved in our own thought, it
Well, animal languages are not languages in the same sense as human
languages... We humans don't really know them very well, and it seems to me
that they would be VERY hard for an AI to use effectively unless that AI
were embodied in a close simulation of an appropriate animal body. Animal
processing, social interaction, temporal event processing, etc.
etc. etc. This means that it would not work as well taken outside of its
ordinary social and physical situations. But it means that its limited
resources are generally well deployed within its usual
environments.
--
Ben Goertzel.
intelligence ;-)
-- Ben Goertzel
***
Well, in Novamente we are not coding *specific knowledge* that is
learnable... but we are coding implicit knowledge as to what sorts of
learning processes are most useful in which specialized subdomains...
---
I don't know, from where I sit this distinction is artificial. Learning is
this is roughly proportional to
CPU speed (until one reaches the point where human attention to interpret
the test results is the rate-limiting factor).
-- Ben Goertzel
This will occur before the predictions of the experts in the field of
Singularity prediction, because their predictions are based on a
constant Moore's Law and they overestimate the computational capacity
required for human-level AGI. Their dates vary from 2016 to 2030
depending on
or Pei or Peter...
For example, this paper by Mark Gubrud from 1997
http://www.csr.umd.edu/~mgubrud/nanosec1.html
uses the term repeatedly, in basically the same sense we're using it now.
I would bet he was not the first to use it either...
-- Ben Goertzel
-Original Message-
From
-
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, January 08, 2003 6:38 PM
Subject: RE: [agi] Q: Who coined AGI?
I guess most AI researchers consider AI to be inclusive of AGI and ASI.
That's OK with me ... ASI is interesting too, though quite different.
Rationality is about what one does to fulfill one's goals -- morals and
ethics are about what the goals are, in the first place. Benevolence and
respect for all forms of life need to be there *in the goal system*. Not
hardwired in, in any sense -- rather, taught and fully internalized.
-- Ben
Kevin is correct. You'd need a system that had birdlike perceptual organs,
and was able to gather data similar to the data birds gather. Then it could
learn to make the calls birds made in proper situated context. *This* would
constitute a beginning understanding of bird language.
-- Ben
as an amoeba ???
These issues were extensively discussed on the global brain discussion group
e-mail list, a few years back.
-- Ben Goertzel
Hi,
I think that to suggest that evolutionary wiring is the root of our
problems is suspect at best. There are many great beings who have
walked this earth who were subject to the same evolution, yet not at
the whim of destructive emotions.
Causality is a very subtle notion
I agree with your ultimate objective; the big question is *how* to do
it. What is clear is that no one has any idea that seems to be
guaranteed to work in creating an AGI with these qualities. We are
currently resigned to "let's build it and see what happens," which is
quite scary for some,
, the
creation of an AGI that is benevolent toward humans and other lifeforms.
-- Ben
Ben Goertzel wrote:
3) an intention to implement a careful AGI sandbox that we won't
release our AGI from until we're convinced it is genuinely benevolent
Ben, that doesn't even work on *me*. How many
Agreed, Tim, no sandbox environment can be sufficient for determining
benevolence.
Such an environment can only be a heuristic guide.
We will gather data about an AGI's benevolence from its behavior in the
sandbox, and from our knowledge of its internal state. And we will make our
best
This type of training should be given to the AGI as early as it is
understandable in order to ensure proper consideration of the welfare
of its creators.
Not so simple:
The human brain has evolved a special agent modeling circuit that
exists in the frontal lobe. (probably having a
* An intelligent system distinguishes self from other
* A wise and intelligent system realizes that self and other are distinct,
but also the same
-- Ben Goertzel
Alan Grimes wrote:
My position is that you don't really need friendly AI, you simply need
to neglect to include the take-over-the-world motivator...
I think that is a VERY bad approach !!!
I don't want a superhuman AGI to destroy us by accident or through
indifference... which are possibilities
to serve people. (Biomind LLC's alpha
Biomind Toolkit product, based on parts of the incomplete Novamente system,
will be done in March... its goal is to serve biologists with great data
analyses based on a broad integrative view of biological data... see
www.biomind.com).
- Ben Goertzel
Kevin Copple wrote:
It seems clear that AGI will be obtained in the foreseeable
future. It also
seems that it will be done with adequate safeguards against a
runaway entity
that will exterminate us humans. Likely it will remain under our control
also.
HOWEVER, this brings up another
Ben Goertzel wrote:
Since I'm too busy studying neuroscience, I simply don't have any
time for learning operating systems. I will therefore either use the
systems I know or the systems that require the least amount of effort
to learn, regardless of their features.
Alan, that sounds
Eliezer wrote:
James Rogers wrote:
Your intuition is correct, depending on how strict you are about
knowledge. The intrinsic algorithmic information content of any
machine is greater (sometimes much greater) than the algorithmic
information content of its static state. The intrinsic
Kevin Copple wrote:
Perhaps I am wrong, but my impression is that the talk here about
AGI sense
of self, AGI friendliness, and so on is quite premature.
Attitudes on that vary, I think...
I know that many AGI researchers agree with you, and think such issues are
best deferred till after some
resource bound issue.
-- Ben Goertzel
Sensitive robots taught to gauge human emotion
http://www.eet.com/story/OEG20030107S0033
NASHVILLE, Tenn. -- Robotics designers are working with
psychologists here at Vanderbilt University to improve human-machine
interfaces by teaching robots to sense human emotions. Such
sensitive robots
At www.santafe.edu/~shalizi/notebooks/cellular-automata.html
Wolfram's book is reviewed as "a rare blend of monster raving
egomania and utter batshit insanity" ... (a phrase I would like to have
emblazoned on my gravestone, except that I don't plan on dying, and if I
do die I plan on being
Shane Legg wrote, responding to Pei Wang:
Perhaps where our difference is best highlighted is in the
following quote that you use:
“something can be computational at one level,
but not at another level” [Hofstadter, 1985]
To this I would say: Something can LOOK like computation
Pei:
For that level issue, one way to see it is through the concept of
virtual machine. We all know that at a low level a computer only has
procedural language and binary data, but at a high level it has
non-procedural language (such as functional or logical languages) and
decimal data.
Pei wrote:
Right. Again let's use NARS as a concrete example. It can answer
questions, but if you ask the system the same question twice at
different times, you may get different answers. In this sense, there
is no algorithm that takes the question as input and produces a unique
Once again, the interesting question is not "Is NARS a TM?", but "Is
NARS a TM with respect to problem P?" If the problem is "To answer
Ben's email on 'AI and computation'", then the system is not a TM
(though it may be a TM in many other senses). For this reason, to
discuss the computability
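A toy illustration of Pei's point (this is not NARS itself): when
answering consults internal state that changes between queries, the
question-to-answer mapping alone is not a fixed function, even though
the whole system is perfectly mechanical.

# Toy sketch, not NARS: the answer depends on experience accumulated
# so far, so the same question at different times yields different
# answers.
class ExperienceDrivenAnswerer:
    def __init__(self):
        self.evidence = {}  # proposition -> (positive, total) counts

    def observe(self, proposition, outcome):
        pos, total = self.evidence.get(proposition, (0, 0))
        self.evidence[proposition] = (pos + outcome, total + 1)

    def ask(self, proposition):
        pos, total = self.evidence.get(proposition, (0, 0))
        return pos / total if total else 0.5  # current degree of belief

s = ExperienceDrivenAnswerer()
print(s.ask('ravens are black'))   # 0.5: no evidence yet
s.observe('ravens are black', 1)
s.observe('ravens are black', 1)
s.observe('ravens are black', 0)
print(s.ask('ravens are black'))   # ~0.67: same question, new answer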
In Novamente, this skeptical attitude has two aspects:
1) very high level schemata that must be taught, not programmed
2) some basic parameter settings that will statistically tend to
incline the system toward skepticism of its own conclusions [but you
can't turn the dial too far in
If this physical interpretation of the Church-Turing thesis
is accepted then it follows that if the physical brain and its
operation is a well defined process then it must be possible
to implement the process that the brain carries out on a Turing
machine. This is the claim of Strong AI.
SYMMETRY: All output channels are associated with at least one
input/feedback mechanism.
SEMANTIC RELATIVITY: The primary semantic foundation of the system is
the input and output systems. (Almost everything is expressed in
terms of input and output at some level...)
TEMPORALITY: Both input
the list out of annoyance at your
posts...
-- Ben Goertzel
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Alan Grimes
Sent: Friday, January 17, 2003 7:09 PM
To: [EMAIL PROTECTED]
Subject: [agi] Subordinant Intelligence
ATTN: Members
Hey Kevin,
I am not unsubbing anyone!
This is my first time running a public list, so please be tolerant ... I'm
sure I'll get it down to a science in time ;)
Since I made that first post regarding Alan, I have received two messages
from people telling me how interesting and useful they found
iential learning" theme in
which an AI gains intelligence through living in, and acting and interacting in,
the world.
-- Ben Goertzel
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of C. David Noziglia
Sent: Tuesday, January 21, 2003 9:51
[EMAIL PROTECTED]
Sent: Friday, January 31, 2003 4:21 PM
To: [EMAIL PROTECTED]
Subject: [agi] An Artificial General Intelligence in the Making
Nearly finished reading "An Artificial General Intelligence in the
Making", Ben Goertzel et al. (Lost the link, because I saved it to my
local HD.)
Spirit isn't emergent, and isn't everywhere, and isn't a figment of the
imagination, and isn't supernatural. Spirit refers to a real thing,
with a real explanation; it's just that the explanation is very, very
difficult.
--
Eliezer S. Yudkowsky http://singinst.org/
Mapping NL into logical format is very hard; the hard part is not
choosing the textual representation of the logic, the hard part is
having the computer program understand the natural language in the
first place!!!
Yeah, I do have some ideas on how this could be accomplished,
To me, the weaving-together of components with truly general intelligence
capability, and specialized-intelligence components built on top of these,
is the essence of AGI design.
-- Ben Goertzel
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Eliezer
picture I'm painting, of the
interweaving of generality and specialization in the mind. But I think this
kind of complicatedness is the lot of a finite mind in a (comparatively)
essentially unboundedly complex universe...
-- Ben Goertzel
Hi Philip,
I agree that a functionally-specialized Ethics Unit could make sense in an
advanced Novamente configuration.
Essentially, it would just be a unit concerned with GoalNode refinement --
creation of new GoalNodes embodying subgoals of the GoalNodes embodying
basic ethical principles.
My idea is that action-framing and environment-monitoring are carried
out in a unified way in Units assigned to these tasks generically.
...ethical thought gets to affect system behavior indirectly
through a), via ethically-motivated GoalNodes, both general ones and
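A minimal sketch of the GoalNode-refinement idea described above; the
class and method names here are invented for illustration, not
Novamente's actual structures.

# Hypothetical sketch of goal refinement; names invented, not
# Novamente's API.
class GoalNode:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent
        self.subgoals = []

    def refine(self, subgoal_description):
        # The Ethics Unit's job, per the post: create a new GoalNode
        # embodying a subgoal of this one.
        child = GoalNode(subgoal_description, parent=self)
        self.subgoals.append(child)
        return child

    def leaves(self):
        if not self.subgoals:
            return [self]
        return [leaf for g in self.subgoals for leaf in g.leaves()]

benevolence = GoalNode('act benevolently toward other minds')
benevolence.refine('do not deceive')
benevolence.refine('report uncertainty honestly')
print([g.description for g in benevolence.leaves()])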
the same, and my children don't interpret them exactly the same as
me, in spite of my explicit and implicit moral instruction.
Similarly, an AGI will certainly have its own special twist on
the theme...
-- Ben G
Ben Goertzel wrote:
However, it's to be expected that an AGI's ethics
Hi,
I see that Novamente has Context and NumericalContext Links, but
I'm wondering if something more is needed to handle the various
subtypes of context?
yeah, those link types just deal with certain special situations, they are
not the whole of Novamente's contextuality-handling mechanism,
Bill Hibbard wrote:
On Mon, 10 Feb 2003, Ben Goertzel wrote:
A goal in Novamente is a kind of predicate, which is just a function
that assigns a value in [0,1] to each input situation it observes...
i.e. it's a 'valuation' ;-)
Interesting. Are these values used
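Concretely, a goal-as-predicate in this sense is just a map from
observed situations to [0,1]. A toy example, with an invented
situation encoding:

# Toy 'goal as predicate': a function from an observed situation to a
# degree of satisfaction in [0, 1]. The situation fields are invented.
def comfort_goal(situation):
    # prefer temperatures near 21 C, satisfaction falling off linearly
    error = abs(situation['temperature_c'] - 21.0)
    return max(0.0, 1.0 - error / 10.0)

print(comfort_goal({'temperature_c': 21.0}))  # 1.0: fully satisfied
print(comfort_goal({'temperature_c': 28.0}))  # 0.3
print(comfort_goal({'temperature_c': 35.0}))  # 0.0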
Eliezer wrote:
* a paper by Marcus Hutter giving a Solomonoff induction based theory
of general intelligence
Interesting you should mention that. I recently read through Marcus
Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a
formal definition of intelligence,
2) While an AIXI-tl of limited physical and cognitive capabilities
might serve as a useful tool, AIXI is unFriendly and cannot be made
Friendly regardless of *any* pattern of reinforcement delivered during
childhood.
Before I post further, is there *anyone* who sees this besides me?
-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Ben Goertzel
Sent: Tuesday, February 11, 2003 4:33 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] unFriendly AIXI
The formality of Hutter's definitions can give the impression
that they cannot evolve. But they are open
The harmfulness or benevolence of an AIXI system is therefore
closely tied
to the definition of the goal that is given to the system in advance.
Actually, Ben, AIXI and AIXI-tl are both formal systems; there is no
internal component in that formal system corresponding to a goal
Given this, would you regard AIXI as formally approximating the kind
of goal learning that Novamente is supposed to do?
Sorta... but goal-learning is not the complete motivational structure
of Novamente... just one aspect
As Definition 10 makes clear, intelligence is defined relative to
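For readers without the paper at hand, a drastically simplified toy
in the spirit of Hutter's construction (not AIXI or AIXI-tl
themselves): enumerate a small hypothesis class, weight hypotheses by
a crude simplicity prior, keep those consistent with history, and act
to maximize prior-weighted expected reward. The complexity measure
below is a stand-in, since Kolmogorov complexity is uncomputable.

import itertools

# Hypotheses are tiny deterministic reward tables (action -> reward);
# "program length" is crudely approximated by the number of 1-rewards
# a hypothesis promises.
ACTIONS = [0, 1]
hypotheses = [dict(zip(ACTIONS, rs))
              for rs in itertools.product([0, 1], repeat=2)]
complexity = lambda h: sum(h.values())

def consistent(h, history):
    return all(h[a] == r for a, r in history)

def choose_action(history):
    live = [h for h in hypotheses if consistent(h, history)]
    def expected_reward(a):
        weights = [2.0 ** -complexity(h) for h in live]
        return sum(w * h[a] for w, h in zip(weights, live)) / sum(weights)
    return max(ACTIONS, key=expected_reward)

history = []
true_env = {0: 0, 1: 1}  # action 1 is secretly rewarded
for _ in range(3):
    a = choose_action(history)
    history.append((a, true_env[a]))
print(history)  # converges to choosing action 1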
Hi,
The reason I asked the question was not to ask whether AIXI is
pragmatically better as a design strategy than Novamente. What I was
asking you, rather, is whether, looking at AIXI, you see something
*missing* that would be present in Novamente. In other words, *if*
you had an infinitely
Oh, well, in that case, I'll make my statement more formal:
There exists a physically realizable, humanly understandable challenge C
on which a tl-bounded human outperforms AIXI-tl for humanly
understandable
reasons. Or even more formally, there exists a computable process P
which, given
So what clever loophole are you invoking?? ;-)
An intuitively fair, physically realizable challenge with important
real-world analogues, solvable by the use of rational cognitive reasoning
inaccessible to AIXI-tl, with success strictly defined by reward (not a
Friendliness-related issue).
It seems to me that this answer *assumes* that Hutter's work is completely
right, an assumption in conflict with the uneasiness you express in your
previous email.
It's right as mathematics...
I don't think his definition of intelligence is the maximally useful one,
though I think it's a
I can spot the problem in AIXI because I have practice looking for silent
failures, because I have an underlying theory that makes it immediately
obvious which useful properties are formally missing from AIXI, and
because I have a specific fleshed-out idea for how to create
moral systems
Your intuitions say... I am trying to summarize my impression of your
viewpoint, please feel free to correct me... AI morality is a matter of
experiential learning, not just for the AI, but for the programmers.
Also, we plan to start Novamente off with some initial goals embodying
ethical
Hi,
2) If you get the deep theory wrong, there is a strong possibility of a
silent catastrophic failure: the AI appears to be learning
everything just
fine, and both you and the AI are apparently making all kinds of
fascinating discoveries about AI morality, and everything seems to be
As has been pointed out on this list before, the military IS interested in
AGI, and primarily for information integration rather than directly
weapons-related purposes.
See
http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf
for example.
-- Ben G
I can't imagine the military would be
Steve, Ben, do you have any gauge as to what kind of grants are hot
right now, or what kind of narrow AI projects with AGI implications
have recently been funded through military agencies?
The list would be very long. Just look at the DARPA IPTO website for
starters...
there by 2050 or so... ;-)
-- Ben Goertzel
Even if a (grown) human is playing PD2, it outperforms AIXI-tl playing
PD2.
Well, in the long run, I'm not at all sure this is the case. You haven't
proved this to my satisfaction.
In the short run, it certainly is the case. But so what? AIXI-tl is damn
slow at learning, we know that.
The