Re: [singularity] The establishment line on AGI

2008-01-14 Thread Benjamin Goertzel
Also, this would involve creating a close-knit community through conferences, journals, common terminologies/ontologies, email lists, articles, books, fellowships, collaborations, correspondence, research institutes, doctoral programs, and other such devices. (Popularization is not on the

[singularity] Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Hi, From Bob Mottram on the AGI list: However, I'm not expecting to see the widespread cyborgisation of human society any time soon. As the article suggests the first generation implants are all devices to fulfill some well defined medical need, and will have to go through all the usual

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Benjamin Goertzel
On Oct 30, 2007 7:17 AM, Mike Tintner [EMAIL PROTECTED] wrote: Yes, I thought we disagreed. To be clear: I'm saying - no society and culture, no individual intelligence. The individual is part of a complex - in the human case - VAST social web. (How ironic, Ben, that you could be asserting

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Benjamin Goertzel
Try to find a single example of any form of intelligence that has ever existed in splendid individual isolation. That is so wrong an idea - like perpetual motion - so fundamental to the question of superAGI's. (It's also a fascinating philosophical issue). Oh, I see ... super-AGI's have

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Benjamin Goertzel
Well put. (BTW as perspective here, I should point out that what I've raised calls for a whole new branch/dimension of social psychology - the study of collective intelligence. Not new to everyone ;-) http://en.wikipedia.org/wiki/Collective_intelligence

Re: [singularity] Why SuperAGI's ..P.S.

2007-10-30 Thread Benjamin Goertzel
Mike, you've got me all wrong, in this particular regard!! My practical plan for creating AGI does in fact involve creating a society of AGI's, living in online virtual worlds like Second Life and Metaplace ... (Although, these AGI's will be able to share thoughts with each other, in a kind of

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-29 Thread Benjamin Goertzel
No AGI or agent can truly survive and thrive in the real world, if it is not similarly part of a collective society and a collective science and technology - and that is because the problems we face are so-o-o problematic. Correct me, but my impression of all the discussion here is that it

Re: [singularity] Pernar's supergoal

2007-10-28 Thread Benjamin Goertzel
Kind regards, Stefan On 10/27/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: To move the chat in a different direction, here is Stefan Pernar's articulated self-improving AGI supergoal, drawn from his paper Benevolence - A Materialist Philosophy of Goodness, which is linked

Re: [singularity] Re: CEV

2007-10-27 Thread Benjamin Goertzel
In other words: if we ever get to a point where the model advocated by Stefan Pernar could be implemented, we are at a point where implementing CEV is also possible! This is not necessarily true ... IMO this statement involves an excessive confidence regarding the relative capabilities of

[singularity] Novamente CTO Cassio Pennachin announces the Singularity is due in 2 months

2007-10-27 Thread Benjamin Goertzel
A little light humor courtesy of Zebulon Goertzel, age 14 ... http://www.youtube.com/watch?v=dXZw0hwIQWY ;-) ben

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote: My one sentence summary of CEV is: What would a better me/humanity want? Is that in line with your understanding? No... I'm not sure I fully grok Eliezer's intentions/ideas, but I will summarize here the current idea I have of CEV...

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
So a VPOP is defined to be a safe AGI. And its purpose is to solve the problem of building the first safe AGI... No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal of carrying out a certain kind of extrapolation. What you are doubting, perhaps, is that it is

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Benjamin Goertzel
On 10/24/07, Mike Tintner [EMAIL PROTECTED] wrote: Every speculation on this board about the nature of future AGI's has been pure fantasy. Even those which try to dress themselves up in some semblance of scientific reasoning. All this speculation, for example, about the friendliness and

Re: [singularity] PhD Ideas

2007-10-24 Thread Benjamin Goertzel
Hi, Right now, doing any serious AI stuff in virtual worlds definitely requires some serious programming. However, here is one suggestion: perhaps you could focus on the environment rather than the AI itself, and design a learning environment for AI systems in virtual worlds. For

Re: [singularity] PhD Ideas

2007-10-24 Thread Benjamin Goertzel
btw... An alternative to SL might be Metaplace, which seems to have a more solid software architecture, but it's still only in alpha so I can't say for sure how useful it will be... ben On 10/24/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Hi, Right now, doing any serious AI stuff

Re: [singularity] Artificial Genital Intelligence

2007-10-08 Thread Benjamin Goertzel
into the thing, and then I just decided to play along instead of insisting on aborting it. -- Ben G On 10/8/07, [EMAIL PROTECTED] wrote: which raises the question... what was the point of accepting the interview? -- Original message from Benjamin Goertzel [EMAIL

[singularity] Job position open at Novamente LLC

2007-09-27 Thread Benjamin Goertzel
Hi all, Novamente LLC is looking to fill an open AI Software Engineer position. Our job ad is attached (it will be placed on the website soon). Qualified and interested applicants should send a resume and cover letter to me at [EMAIL PROTECTED]. However, please read the ad carefully first to be

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-28 Thread Benjamin Goertzel
On 7/12/07, Panu Horsmalahti [EMAIL PROTECTED] wrote: It is my understanding that the basic problem in Friendly AI is that it is possible for the AI to interpret the command 'help humanity' etc. wrongly, and then destroy humanity (which we don't want it to do). The whole problem is to find some way

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-21 Thread Benjamin Goertzel
(Echoing Joshua Fox's request:) Ben, could you also tell us where you disagree with Eliezer? Eliezer and I disagree on very many points, and also agree on very many points, but I'll mention a few key points here. (I also note that Eliezer's opinions tend to be a moving target, so I can't say

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-21 Thread Benjamin Goertzel
Hi, So, er, do you have an alternative proposal? Even if the probability of A or B is low, if there are no alternatives other than doom by old age/nanowar/asteroid strike/virus/whatever, it is still worthwhile to pursue them. Note that I don't know how we could go about calculating what the

Re: [singularity] Bootstrapping AI

2007-06-04 Thread Benjamin Goertzel
No, the problem is, the theoretical framework of AI just isn't there. The AI academics have nothing to deliver. I think that's a bit unfair. AI academics are a large and heterogeneous group. The AI funding mechanisms are broken pretty badly, so that those AI academics with the most clues

[singularity] Donations to support open-source AGI research (SIAI Research Program)

2007-06-02 Thread Benjamin Goertzel
Hi all, I suppose many of you are aware that I've become involved with the Singularity Institute for AI over the last few months. It's been a pretty low-key involvement so far: What I did was basically to design for them a Research Program http://www.singinst.org/research/summary [see menu to

Re: [singularity] The humans are dead...

2007-05-28 Thread Benjamin Goertzel
Unfortunately, I have come to agree with Keith on this issue. Discussing issues like this [comparative moral value of humans versus superhuman AGIs] on public mailing lists seems fraught with peril for anyone who feels they have a serious chance of actually creating AGI. Words are slippery, and

Re: [singularity] Friendly question...

2007-05-25 Thread Benjamin Goertzel
Well, the term Friendliness as introduced by Eliezer Yudkowsky is supposed to roughly mean beneficialness to humans. What you are talking about is a quite different thing, just the AI top-level goal of minimizing entropy. As it happens, I think that is a poorly formulated goal, and if I were

[singularity] Bad Friendly AI poetry ;-)

2007-05-25 Thread Benjamin Goertzel
On the Dangers of Incautious Research and Development: A scientist, slightly insane / Created a robotic brain / But the brain, on completion / Favored assimilation / His final words: Damn, what a pain! ... http://www.goertzel.org/blog/blog.htm

Re: [singularity] Bad Friendly AI poetry ;-)

2007-05-25 Thread Benjamin Goertzel
once shook his head and exclaimed: My career is now dead; / for although my AI / has an IQ that's high / it insists it exists to be bred!

Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Benjamin Goertzel
Peter Thiel, a businessman I know who is a damn good chess player, told me the same story about chess. Now he is a financial trader, and feels he can outperform software in this domain. But when software can outperform him at trading, he'll get sick of that too. What will be left for

Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Benjamin Goertzel
What will be left for unaugmented, non-uploaded humans after computers can outdo them in all intellectual and athletic tasks? Art and sex, I would suppose ;-) After all, it's still fun to learn to play Bach even though Wanda Landowska did it better... -- Ben G Basically, humans will have

Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Benjamin Goertzel
Shane, Thank you for being patronizing. Some of us do understand the AIXI work in enough depth to make valid criticism. The problem is that you do not understand the criticism well enough to address it. Richard Loosemore. Richard, While you do have the math background to understand the

Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-14 Thread Benjamin Goertzel
If this is so, then where are the great, working AI algorithms that we supposedly already have that run very slowly or can only be run on Blue Gene-type supercomputers? Can you name a single, important, functioning AI algorithm that requires a supercomputer to run? Genetic programming can
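
[Aside, to make the compute point concrete: below is a minimal genetic-programming sketch in Python, purely illustrative and not drawn from the thread or from Novamente's code. It evolves expression trees against a toy target of x^2 + x. The cost structure is the point: every generation re-scores the whole population on every sample, so runtime grows as population x generations x samples, which is why serious GP runs end up on clusters or Blue Gene-class machines.]

import math
import operator
import random

# Function set: (callable, arity). Terminals are 'x' and two constants.
FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]

def random_tree(depth=3):
    # Grow a random expression tree over {x, 1.0, 2.0} and FUNCS.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1.0, 2.0])
    fn, arity = random.choice(FUNCS)
    return (fn, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    # Recursively evaluate a tree at input x.
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    fn, args = tree
    return fn(*(evaluate(a, x) for a in args))

def fitness(tree, xs):
    # Mean squared error against the toy target x^2 + x (lower is better).
    total = 0.0
    for x in xs:
        err = evaluate(tree, x) - (x * x + x)
        total += err * err
    mse = total / len(xs)
    return mse if math.isfinite(mse) else float('inf')

def mutate(tree):
    # Replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    fn, args = tree
    new_args = list(args)
    i = random.randrange(len(new_args))
    new_args[i] = mutate(new_args[i])
    return (fn, new_args)

xs = [i / 10.0 for i in range(-20, 21)]
pop = [random_tree() for _ in range(200)]
for gen in range(50):                # 200 trees * 50 gens * 41 samples of evaluation
    pop.sort(key=lambda t: fitness(t, xs))
    elite = pop[:50]                 # truncation selection: keep the top quarter
    pop = elite + [mutate(random.choice(elite)) for _ in range(150)]
print('best MSE so far:', fitness(pop[0], xs))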

Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-13 Thread Benjamin Goertzel
FYI, I am scheduled to give a talk to Google's tech staff on AGI and Novamente, later in the month... I believe Eliezer may be speaking there sometime soon, also... -- Ben On 5/13/07, Joshua Fox [EMAIL PROTECTED] wrote: Private companies like Google are, as far as I am aware, spending

Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-10 Thread Benjamin Goertzel
Matt, A couple comments... 1) SIAI does not currently have an active AGI engineering project going, though it may well hatch one in future. As well as potentially hatching its own AGI engineering project, SIAI may also engage in research partnerships with private AGI research efforts, such as

[singularity] Singularity; the video game

2007-04-28 Thread Benjamin Goertzel
-- Forwarded message -- From: Benjamin Goertzel [EMAIL PROTECTED] Date: Apr 28, 2007 3:59 PM Subject: Re: tvix To: Zarathustra Goertzel [EMAIL PROTECTED] heh ... a good concept for a game, but apparently not too awesomely executed yet... On 4/28/07, Zarathustra Goertzel [EMAIL

Re: [singularity] Future of AI?

2007-04-26 Thread Benjamin Goertzel
I posted some thoughts on that book when it first came out: http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm Since that time I've had a chance to talk to some neuroscientists about Hawkins' book and also to look at his team's publicly released code. Some thoughts:

Re: [singularity] News bit: Carnegie Mellon unveils Internet-controlled robots anyone can build

2007-04-26 Thread Benjamin Goertzel
there are available wifi connections ;-) ben g On 4/26/07, Mike Tintner [EMAIL PROTECTED] wrote: Could these robots be connected up to a network of Net computers so as to massively extend their mental capabilities?

[singularity] ANNOUNCEMENT: ARTIFICIAL GENERAL INTELLIGENCE 2008 CONFERENCE

2007-04-26 Thread Benjamin Goertzel
Hi all, It's my pleasure to announce a conference that I'm helping to co-organize... ** The First Conference on Artificial General Intelligence, aka AGI-08 ** Information may be found at the website http://agi-08.org/ It will be in early March 2008 at the University of Memphis, and we have

[singularity] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-04-24 Thread Benjamin Goertzel
I also don't think you will recognize AGI. You have never seen examples of it. Earlier I posted examples of Google passing the Turing test, but nobody believes that is AGI. If nothing is ever labeled AGI, then nothing ever will be. Google does not pass the Turing test. Giving human-like

Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Benjamin Goertzel
are considered, more realistic goals can be set] - Original Message - From: Benjamin Goertzel [EMAIL PROTECTED] To: singularity@v2.listbox.com Sent: Tuesday, April 24, 2007 9:50 PM Subject: Re: [singularity] Why do you think your AGI design will work? Hi, We don't have any

Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Benjamin Goertzel
, which might seem most efficient, but in cinematic dreams. And so, almost certainly, do animal minds). I reckon an AGI whose skills were in various ways navigational, like those of the earliest animals, would be a far more realistic target.

[singularity] Torboto - the Torture Robot

2007-04-23 Thread Benjamin Goertzel
Hopefully not the future of AGI... http://www.crooksandliars.com/2007/04/22/torboto-the-robot-that-tortures-people/ [Warning: contents could be offensive to some... crude humor etc. ...] -- Ben G

Re: [singularity] Human Intelligence

2007-04-18 Thread Benjamin Goertzel
Hi Steve, I don't know of any list focused specifically on human augmentation. However, this list is not focused on any particular class of technologies, and discussions of human augmentation are very welcome here! -- Ben G [list owner] On 4/18/07, stephen white [EMAIL PROTECTED] wrote: Is

Re: [singularity] Chaitin randomness

2007-01-19 Thread Benjamin Goertzel
Hey gts, I think this topic is more appropriate for agi@v2.listbox.com, which you can sign up for at the same place as you signed up for this Singularity email list. The reason is that foundations of probability is a highly technical issue of relevance to AGI engineering; whereas this email
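
[Aside, for readers new to the thread topic: Chaitin randomness refers to Chaitin's halting probability, standardly defined (background knowledge, not quoted from the thread) as

    \Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

where U is a prefix-free universal Turing machine and |p| is the bit-length of program p. Omega is algorithmically random: computing its first n bits requires a program of roughly n bits, so its binary expansion is incompressible.]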