Also, this would involve creating a close-knit community through
conferences, journals, common terminologies/ontologies, email lists,
articles, books, fellowships, collaborations, correspondence, research
institutes, doctoral programs, and other such devices. (Popularization is
not on the
Hi,
From Bob Mottram on the AGI list:
However, I'm not expecting to see the widespread cyborgisation of
human society any time soon. As the article suggests the first
generation implants are all devices to fulfill some well defined
medical need, and will have to go through all the usual
On Oct 30, 2007 7:17 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Yes, I thought we disagreed.
To be clear: I'm saying - no society and culture, no individual
intelligence. The individual is part of a complex - in the human case -
VAST social web. (How ironic, Ben, that you could be asserting
Try to find a single example of any form of intelligence that has ever
existed in splendid individual isolation. That is so wrong an idea - like
perpetual motion - so fundamental to the question of superAGI's. (It's
also a fascinating philosophical issue).
Oh, I see ... super-AGI's have
Well put. (BTW as perspective here, I should point out that what I've
raised
calls for a whole new branch/dimension of social psychology - the study of
collective intelligence.)
Not new to everyone ;-)
http://en.wikipedia.org/wiki/Collective_intelligence
-
Mike, you've got me all wrong, in this particular regard!!
My practical plan for creating AGI does in fact involve creating a society
of AGI's, living in online virtual worlds like Second Life and Metaplace ...
(Although these AGI's will be able to share thoughts with each other, in a
kind of
No AGI or agent can truly survive and thrive in the real world, if it is
not similarly part of a collective society and a collective science and
technology - and that is because the problems we face are so-o-o
problematic. Correct me, but my impression of all the discussion here is
that it
Kind regards,
Stefan
On 10/27/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
To move the chat in a different direction, here is Stefan Pernar's
articulation of a self-improving AGI supergoal, drawn from his paper
Benevolence--
A Materialist Philosophy of Goodness, which is linked
In other words: if we ever get to a point where the model advocated by
Stefan Pernar could be implemented, we are at a point where
implementing CEV is also possible!
This is not necessarily true ... IMO this statement involves an excessive
confidence regarding the relative capabilities of
A little light humor courtesy of Zebulon Goertzel, age 14 ...
http://www.youtube.com/watch?v=dXZw0hwIQWY
;-)
ben
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
My one sentence summary of CEV is: What would a better me/humanity
want?
Is that in line with your understanding?
No...
I'm not sure I fully grok Eliezer's intentions/ideas, but I will summarize
here the
current idea I have of CEV..
So a VPOP is defined to be a safe AGI. And its purpose is to solve the
problem of building the first safe AGI...
No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal
of carrying out a certain kind of extrapolation
What you are doubting, perhaps, is that it is
On 10/24/07, Mike Tintner [EMAIL PROTECTED] wrote:
Every speculation on this board about the nature of future AGI's has been
pure fantasy. Even those which try to dress themselves up in some
semblance
of scientific reasoning. All this speculation, for example, about the
friendliness and
Hi,
Right now, doing any serious AI stuff in virtual worlds definitely requires
some serious programming
However, here is one suggestion: perhaps you could focus on the
environment rather than the AI itself, and you could design a learning
environment for AI systems in virtual worlds. For
btw...
An alternative to SL might be Metaplace, which seems to have a more solid
software architecture, but it's still only in alpha so I can't say for sure
how useful it will be...
ben
On 10/24/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Hi,
Right now, doing any serious AI stuff
into
the thing, and then I just decided to play along instead of insisting
on aborting it.
-- Ben G
On 10/8/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
which raises the question... what was the point of accepting the interview?
-- Original message from Benjamin Goertzel [EMAIL
Hi all,
Novamente LLC is looking to fill an open AI Software Engineer position.
Our job ad is attached (it will be placed on the website soon).
Qualified and interested applicants should send a resume and cover
letter to me at [EMAIL PROTECTED]. However, please read the ad
carefully first to be
On 7/12/07, Panu Horsmalahti [EMAIL PROTECTED] wrote:
It is my understanding that the basic problem in Friendly AI is that it is
possible for the AI to interpret a command like "help humanity" wrongly, and
then destroy humanity (which we don't want it to do). The whole problem is to
find some way
(Echoing Joshua Fox's request:) Ben, could you also tell us where you
disagree with Eliezer?
Eliezer and I disagree on very many points, and also agree on very
many points, but I'll mention a few key points here.
(I also note that Eliezer's opinions tend to be a moving target, so I
can't say
Hi,
So, er, do you have an alternative proposal? Even if
the probability of A or B is low, if there are no
alternatives other than doom by old
age/nanowar/asteroid strike/virus/whatever, it is
still worthwhile to pursue them. Note that I don't
know how we could go about calculating what the
No, the problem is, the theoretical framework of AI just isn't there. The
AI academics have nothing to deliver.
I think that's a bit unfair. AI academics are a large and heterogeneous
group. The AI funding mechanisms are broken pretty badly, so that those AI
academics with the most clues
Hi all,
I suppose many of you are aware that I've become involved with the
Singularity Institute for AI over the last few months.
It's been a pretty low-key involvement so far: What I did was basically to
design for them a Research Program
http://www.singinst.org/research/summary
[see menu to
Unfortunately, I have come to agree with Keith on this issue.
Discussing issues like this [comparative moral value of
humans versus superhuman AGIs] on public
mailing lists seems fraught with peril for anyone who feels they
have a serious chance of actually creating AGI.
Words are slippery, and
Well, the term Friendliness as introduced by Eliezer Yudkowsky
is supposed to roughly mean beneficialness to humans.
What you are talking about is a quite different thing, just the AI
top-level goal of minimizing entropy
As it happens, I think that is a poorly formulated goal, and if I
were
On the Dangers of Incautious Research and Development
A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: Damn, what a pain!
...
http://www.goertzel.org/blog/blog.htm
-
once shook his head
and exclaimed My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!
Benjamin Goertzel wrote:
On the Dangers of Incautious Research and Development
A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Peter Thiel, a businessman I know who is a damn good chess player, told me
the same story about chess.
Now he is a financial trader, and feels he can outperform software in this
domain.
But when software can outperform him at trading, he'll get sick of that too.
What will be left for
What will be left for unaugmented, non-uploaded humans after computers can
outdo
them in all intellectual and athletic tasks?
Art and sex, I would suppose ;-)
After all it's still fun to learn to play Bach even though Wanda Landowska
did it
better...
-- Ben G
Basically, humans will have
Shane,
Thank you for being patronizing.
Some of us do understand the AIXI work in enough depth to make valid
criticism.
The problem is that you do not understand the criticism well enough to
address it.
Richard Loosemore.
Richard,
While you do have the math background to understand the
If this is so, then where are the great, working AI
algorithms that we supposedly already have that run
very slowly or can only be run on Blue Gene-type
supercomputers? Can you name a single, important,
functioning AI algorithm that requires a supercomputer
to run?
Genetic programming can
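The thread breaks off after "Genetic programming can", so as background for that exchange, here is a minimal sketch of what tree-based genetic programming looks like in practice: evolving a small arithmetic expression tree toward the target x*x + x. Every name, parameter, and the target function below are illustrative choices for this sketch, not anything from the original discussion; real GP runs (the kind that motivate supercomputer-scale compute) use vastly larger populations and primitive sets.

```python
import random
import operator

# Minimal tree-based genetic programming sketch (illustrative only):
# evolve an expression tree over {+, -, *, x, 1.0} toward f(x) = x*x + x.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0]

def random_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate a tree at input value x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    """Mean squared error against the target function (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in samples) / len(samples)

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def crossover(a, b):
    """Crude subtree crossover: graft a piece of b into a."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return b if not isinstance(b, tuple) else random.choice(b[1:])
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

def evolve(pop_size=60, generations=40, seed=0):
    """Truncation selection plus crossover and mutation; returns the best tree found."""
    random.seed(seed)
    samples = [i / 2.0 for i in range(-6, 7)]
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, samples))
        survivors = pop[:pop_size // 3]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda t: fitness(t, samples))

best = evolve()
```

The point relevant to the thread: each of the pop_size * generations fitness evaluations is independent, which is exactly why GP parallelizes onto large clusters so naturally.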
FYI, I am scheduled to give a talk to Google's tech staff on AGI and
Novamente,
later in the month...
I believe Eliezer may be speaking there sometime soon, also...
-- Ben
On 5/13/07, Joshua Fox [EMAIL PROTECTED] wrote:
Private companies like Google are, as far as I am aware,
spending
Matt,
A couple comments...
1)
SIAI does not currently have an active AGI engineering project going,
though it may well hatch one in future.
As well as potentially hatching its own AGI engineering project, SIAI
may also engage in research partnerships with private AGI research
efforts, such as
-- Forwarded message --
From: Benjamin Goertzel [EMAIL PROTECTED]
Date: Apr 28, 2007 3:59 PM
Subject: Re: tvix
To: Zarathustra Goertzel [EMAIL PROTECTED]
heh ... a good concept for a game, but apparently not too awesomely executed
yet...
On 4/28/07, Zarathustra Goertzel [EMAIL
I posted some thoughts on that book when it first came out:
http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm
Since that time I've had a chance to talk to some neuroscientists
about Hawkins' book and also to look at his team's publicly released code.
Some thoughts:
there are available wifi
connections ;-)
ben g
On 4/26/07, Mike Tintner [EMAIL PROTECTED] wrote:
Could these robots be connected up to a network of Net computers so as to
massively extend their mental capabilities?
- Original Message -
*From:* Benjamin Goertzel [EMAIL PROTECTED
Hi all,
It's my pleasure to announce a conference that I'm helping to co-organize...
** The First Conference on Artificial General Intelligence, aka AGI-08 **
Information may be found at the website
http://agi-08.org/
It will be in early March 2008 at the University of Memphis, and we have
I also don't think you will recognize AGI. You have never seen examples
of
it. Earlier I posted examples of Google passing the Turing test, but
nobody
believes that is AGI. If nothing is ever labeled AGI, then nothing ever
will
be.
Google does not pass the Turing test. Giving human-like
are considered, more
realistic goals can be set]
- Original Message -
*From:* Benjamin Goertzel [EMAIL PROTECTED]
*To:* singularity@v2.listbox.com
*Sent:* Tuesday, April 24, 2007 9:50 PM
*Subject:* Re: [singularity] Why do you think your AGI design will work?
Hi,
We don't have any
, which might seem most efficient,
but in cinematic dreams. And so, almost certainly do animal minds).
I reckon an AGI whose skills were in various ways navigational, like those
of the earliest animals, would be a far more realistic target.
- Original Message -
*From:* Benjamin Goertzel
Hopefully not the future of AGI...
http://www.crooksandliars.com/2007/04/22/torboto-the-robot-that-tortures-people/
[Warning: contents could be offensive to some... crude humor etc. ...]
-- Ben G
-
Hi Steve,
I don't know of any list focused specifically on human augmentation.
However, this list is not focused on any particular class of
technologies, and discussions of human augmentation are very welcome
here!
-- Ben G [list owner]
On 4/18/07, stephen white [EMAIL PROTECTED] wrote:
Is
Hey gts,
I think this topic is more appropriate for
agi@v2.listbox.com
which you can sign up for at the same place as you signed up for this
Singularity email list.
The reason is that foundations of probability is a highly technical
issue of relevance to AGI engineering; whereas this email