[singularity] Test of mail-archive.com

2006-09-08 Thread Ben Goertzel
No need to reply to this exciting message, I'm just testing if archiving works ;-) ben

[singularity] Is Friendly AI Bunk?

2006-09-09 Thread Ben Goertzel
...linked above, will be much appreciated and enjoyed. (All points of view will be accepted openly, of course: although I am hosting this new list, my goal is not to have a list of discussions mirroring my own view, but rather to have a list I can learn something from.) Yours, Ben Goertzel

Re: [singularity] Re: Is Friendly AI Bunk?

2006-09-10 Thread Ben Goertzel
Hi Aleksei, In this thread [see the end of this email for quote], I feel you are attacking a peripheral part of Shane's argument. The problems with the feasibility of defining or implementing "Friendly AI" that Shane is presenting in his main blog entry are not really dependent on his prefatory

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, It follows that the AIXItl algorithm applied to friendliness would be effectively more friendly than any other time-t and space-bounded agent. Personally I find that satisfying in the sense that once "compassion", "growth" and "choice" or the classical "friendliness" has been defined an opti

[singularity] Switching aging off in stem cells

2006-09-11 Thread Ben Goertzel
Meanwhile, the biologists continue making excellent, steady progress toward understanding how the curse of aging operates: http://www.hhmi.org//news/morrison20060906.html " A single molecular switch plays a central role in inducing stem cells in the brain, pancreas, and blood to lose function

[singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, On Ben's essay: Ben is arguing that due to incomputable complexity 'friendliness' can only be guaranteed under unsatisfactorily narrow circumstances. Independent of whether one agrees or not, it would follow that if this is the case then substituting friendliness with one or all of the alternative goal

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Since we assume that AIXItl is effectively better at achieving its goal than any other agent with the same space and time resource limitations, the specific values for t and l do not matter. Do they? > In short: it's some pretty math with some conceptual evocativeness, > but not of any pragmatic v
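For archive readers, the formal result being leaned on here is (if I recall Hutter's AIXItl construction correctly, so treat this as a sketch rather than a citation): AIXItl outperforms every agent whose program length is at most l and whose computation time per interaction cycle is at most t, but pays for this with its own per-cycle cost of roughly

    O(2^l \cdot t)

So "better than any other (t, l)-bounded agent" holds by construction for any choice of t and l, yet only relative to that bounded class, and at an overhead exponential in l; that overhead is precisely the pragmatic-value complaint quoted above.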

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
The subtle question raised by the AI/human fusion approach is: Once you become something vastly more intelligent and general than "human", in what sense are you still "you" ... ?? In what sense have you simply killed yourself slowly (or not so slowly) and replaced yourself with something cleverer

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Lucio wrote: In order to produce strong AI, though, we need to understand the mind from low to high levels, or understand the processes that make high levels emerge from low ones. That seems a very tortuous scientific path, one that cannot be achieved by conventional and comparatively predictabl

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Hi, Just for kicks - let's assume that AIXItl yields 1% more intelligent results when provided 10^6 times the computational resources when compared to another algorithm X. Let's further assume that today the cost associated with X for reaching a benefit of 1 will be 1 compared to a cost of 10^6
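Stefan's hypothetical numbers can be pushed one step further (a back-of-the-envelope sketch, assuming hardware price-performance doubles roughly every 18 months):

    10^6 \approx 2^{20}, \qquad 20 \times 1.5 \text{ years} = 30 \text{ years}

That is, even under sustained Moore's-law growth it would take about three decades before AIXItl's assumed million-fold resource penalty costs what algorithm X costs today; and since both algorithms ride the same hardware curve, the relative penalty never shrinks at all.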

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel
Thanks Ben, Russell et al for being so patient with me ;-) To summarize: AIXItl's inefficiencies are so large and the additional benefit it provides is so small that it will likely never be a logical choice over other more efficient, less optimal algorithms. Stefan The additional benefit it *wou

Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Ben Goertzel
When the purse-strings open, and the money flows, it will flow like tax dollars, bequests, and donations do -- toward politically tenable projects. Yudkowsky's Friendliness theory, whether you agree with its technical feasibility or not, is very effectively positioning the Singularity Institute's

Re: Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Ben Goertzel
Dr. Omni wrote: In particular cases a less intelligent entity is perfectly able to predict the behavior of a more intelligent one. For instance, my cats are less intelligent than me (or so I hope ;-) and they can predict several of my actions and take decisions based on that. For instance "Lúcio

Re: Re: [singularity] Is Friendly AI Bunk?

2006-09-14 Thread Ben Goertzel
In my view, thinking too much about whether one can prove that a system is friendly or not is getting a bit ahead of ourselves. What we need first is a formal definition of what friendly means. Then we can try to figure out whether or not we can prove anything. I think we should focus on the pro

Re: [singularity] Re: Is Friendly AI Bunk?

2006-09-14 Thread Ben Goertzel
On 9/14/06, Anna Taylor <[EMAIL PROTECTED]> wrote: Ben wrote: I don't think that Friendliness, to be meaningful, needs to have a compact definition. Anna's questions: Then how will you build a "Friendly AI"? If I wanted to build an AI embodying my own personal criterion of Friendliness (or "be

Re: [singularity] Optimization targets

2006-09-15 Thread Ben Goertzel
Hi, After reading KnowabilityOfFAI and perhaps coming to an Awful Realization, it seems Friendliness is plausible with strict criteria for an optimization target. It also seems an optimization target is necessary, regardless, with more or less strict criteria. This passage from KnowabilityOfFA

[singularity] Excerpt from a work in progress by Eliezer Yudkowsky

2006-09-15 Thread Ben Goertzel
...Subject: Please fwd to Singularity list To: Ben Goertzel <[EMAIL PROTECTED]> Ben, please forward this to your Singularity list. ** Excerpts from a work in progress follow. ** Imagine that I'm visiting a distant city, and a local friend volunteers to drive me to the airport.

[singularity] An interesting but currently failed attempt to prove provable Friendly AI impossible...

2006-09-17 Thread Ben Goertzel
Check out the new post and dialogue at www.vetta.org Shane posted a draft proof that provable Friendly AI is impossible (in a certain sense) ... but Eliezer found an oversight in the proof ;-) A fun read... Ben

Re: [singularity] Proof of the impossibility of proving that friendliness is not provable

2006-09-19 Thread Ben Goertzel
Right, but then when your program encounters an evil alien who says "I'll destroy Earth unless you say 'Cheese!' ", your program winds up taking an action that isn't very benevolent after all... This gets at the distinction between outcome-based and action-based Friendliness, which I alluded to e

[singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-23 Thread Ben Goertzel
Hi, I have been considering co-authoring some verbiage aimed at explaining the Singularity notion to intelligent, educated non-nerds (together with a writer I know who is more experienced and expert than me at writing for a non-technical audience). Of course this has been done before, e.g. it ha

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-23 Thread Ben Goertzel
From what I've seen the Kurzweil approach is among the most effective... if by "Singularity" you mean "smarter than human intelligence making everything fly out the window", only a couple hundred people even understand this, and most of them arrived at it through Staring Into the Singularity. Hm

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-23 Thread Ben Goertzel
Mark, you do have a good point. The viability of speculations about future tech cannot be rationally assessed by people who simply lack knowledge about current science and technology. I think Kurzweil does do an excellent job in this regard: his book spends a lot of time just educating the reade

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-24 Thread Ben Goertzel
Olie, It seems to me that the time-scale issue is very critical here, and is indeed the most dubious aspect of popular Singularitarian prognostications. It's quite possible to accept that a) the advent of greater than human intelligence will likely lead to a total transformation of reality, min

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-25 Thread Ben Goertzel
Peter Voss wrote: I have a more fundamental question though: Why in particular would we want to convince people that the Singularity is coming? I see many disadvantages to widely promoting these ideas prematurely. If one's plan is to launch a Singularity quickly, before anyone else notices, the

Re: [singularity] i'm new

2006-10-08 Thread Ben Goertzel
Hi, i'm very interested in following and joining discussions about 2012. cheers aLe mu(RaRo) Regarding 2012 ... while this list is open to discussion of *every* aspect of the Singularity, as list owner I would like to maintain a focus on the Singularity in the Vinge-ean sense, meaning Sing

Re: [singularity] i'm new

2006-10-09 Thread Ben Goertzel
Hi, On 10/9/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote: Just a sidebar on the whole 2012 topic. It's quite possible that singularity is **already here** as new knowledge and that the only barrier is social acceptance. Radical new knowledge is historically created long before it is accepted by

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel
Hi, The reason that so many in the intellectual community see Singularity discussion as garbage is because there is so little definitional consensus that it's close to impossible to determine what's actually being discussed. I doubt this... I think the reason that Singularity discussion is di

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel
Hank, On 10/10/06, Hank Conn <[EMAIL PROTECTED]> wrote: The all-encompassing definition of the Singularity is the point at which an intelligence gains the ability to recursively self-improve the underlying computational processes of its intelligence. I already have that ability -- I'm just ver

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel
... This doesn't mean compact definitions aren't useful in some contexts, just that they should not be interpreted to fully capture the concepts to which they are attached... -- Ben G On 10/10/06, BillK <[EMAIL PROTECTED]> wrote: On 10/10/06, Ben Goertzel wrote: > > But

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel
On the other hand (to add a little levity to the conversation), a very avid 2012-ite I knew last year informed me that "You should just mix eight ounces of Robitussin with eight ounces of vodka and drink it fast -- you'll find your own private Singularity, right there!!" ;-pp On 10/10/06, Lúci

[singularity] Minds beyond the Singularity: literally self-less ?

2006-10-10 Thread Ben Goertzel
In something I was writing today, for a semi-academic publication, I found myself inserting a paragraph about how unlikely it is that superhuman AI's after the Singularity will possess "selves" in anything like the sense that we humans do. It's a bit long and out of context, but the passage in wh

Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Ben Goertzel
...[EMAIL PROTECTED]> wrote: How much of our "selves" are driven by biological processes that an AI would not have to begin with, for example... fear? I would think that the AI's self would be fundamentally different to begin with due to this. It may never have to modify itself to achie

Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Ben Goertzel
Hi, In regard to your "finally" paragraph, I would speculate that advanced intelligence would tend to converge on a structure of increasing stability feeding on increasing diversity. As the intelligence evolved, a form of natural selection would guide its structural development, not toward incr

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi, Mike Deering wrote: If you really were interested in working on the Singularity you would be designing your education plan around getting a job at the NSA.  The NSA has the budget, the technology, the skill set, and the motivation to build the Singularity.  Everyone else, universities, priva

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Japan, despite a lot of interest back in 5th Generation computer days, seems to have a difficult time innovating in advanced software. I am not sure why. I talked recently, at an academic conference, with the guy who directs robotics research labs within ATR, the primary Japanese government resea

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi, I know you must be frustrated with fund raising, but investor reluctance is understandable from the perspective that for decades now there has always been someone who said we're N years from full-blown AI, and then N years passed with nothing but narrow AI progress. Of course, someone will end up

Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
I think Mark's observation is correct.  Anti-aging is far easier to fund than AGI because there are a lot more people interested in preserving their own lives than in creating AGI  Furthermore, the M-prize money is to fund a **prize**, not directly to fund research on some particular project...

Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Michael, I think your summary of the situation is in many respects accurate; but, an interesting aspect you don't mention has to do with the disclosure of technical details... In the case of Novamente, we have sufficient academic credibility and know-how that we could easily publish a raft of journal

[singularity] AGI funding: US versus China

2006-10-23 Thread Ben Goertzel
Hi, As a contrast to this discussion on why AGI is hard to fund in the US, I note that Hugo de Garis has recently relocated to China, where he was given a professorship and immediately given the "use" of basically as many expert programmers/researchers as he can handle. Furthermore, I have strong r

Re: [singularity] AGI funding: US versus China

2006-10-23 Thread Ben Goertzel
...that you're close to finishing your project, I'd have guards posted in the server room. Things could get scary really quickly. Josh Treadwell Ben Goertzel wrote: Hi, As a contrast to this discussion on why AGI is hard to fund in the US, I note that Hugo de Garis has recently

Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Hi, > Ditto with just about anything else that's at all innovative -- e.g. was > Einstein's General Relativity a fundamental new breakthrough, or just a > tweak on prior insights by Riemann and Hilbert? I wonder if this is a sublime form of irony for a horribly naïve and arrogant analogy to GR I drew

Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Though I have remained often-publicly opposed to emergence and 'fuzzy' design since first realising what the true consequences (of the heavily enhanced-GA-based system I was working on at the time) were, as far as I know I haven't made that particular mistake again. Whereas, my view is that it is preci

Re: Re: [singularity] Defining the Singularity

2006-10-24 Thread Ben Goertzel
Loosemore wrote: > The motivational system of some types of AI (the types you would > classify as tainted by complexity) can be made so reliable that the > likelihood of them becoming unfriendly would be similar to the > likelihood of the molecules of an Ideal Gas suddenly deciding to split > int
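For anyone unfamiliar with the Ideal Gas analogy, the standard statistical-mechanics estimate behind it: the probability that all N molecules of a gas spontaneously gather in one half of their container is

    P = 2^{-N} \approx 10^{-1.8 \times 10^{23}} \quad \text{for a mole, } N \approx 6 \times 10^{23}

In other words, Loosemore is claiming unfriendliness can be made not just unlikely but thermodynamically negligible; the rest of the thread disputes whether any such bound can actually be established for a complex motivational system.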

[singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel
http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/singularity/ Tuesday 24 October 2006, 9pm on BBC Two "Meet the scientific prophets who claim we are on the verge of creating a new type of human - a human v2.0. "It's predicted that by 2029 computer intelligence will equal the powe

Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel
...donated something like 15 grand to SIAI a while back). http://www.sl4.org/archive/0206/4015.html I also think if you are expecting the Singularity in 2029 or after, you might be in for quite an early surprise. Ugh.. the poll on the website says "Whose vision do you believe: Kurzweil o

Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel
On 10/24/06, Russell Wallace <[EMAIL PROTECTED]> wrote: On 10/24/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: > I know Hugo de Garis pretty well personally, and I can tell you that > he is certainly not "loony" on a personal level, as a human being. > He's a

Re: Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel
Right - for the record when I use words like "loony" in this sort of context I'm not commenting on how someone might come across face to face (never having met him), nor on what a psychiatrist's report would read (not being a psychiatrist) - I'm using the word in exactly the same way that I would

Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Ben Goertzel
Hi, About hybrid/integrative architectures, Michael Wilson said: I'd agree that it looks good when you first start attacking the problem. Classic ANNs have some demonstrated competencies, classic symbolic AI has some different demonstrated competencies, as do humans and existing non-AI software.

Re: Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel
Hi Richard, I have left that email sitting in my Inbox, and skimmed it over, but did not find time to read it carefully and respond to it yet. I only budget myself a certain amount of time per day for recreational emailing (and have been exceeding that limit this week, already ;-) I hope t

[singularity] Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel
...systems) are certainly NOT reliable in terms of Friendliness or any other subtle psychological property... -- Ben G On 10/25/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: Ben Goertzel wrote: > Loosemore wrote: >> > The motivational system of some types of AI (the types you

Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel
Hi, Do most in the field believe that only a war can advance technology to the point of singularity-level events? Any opinions would be helpful. My view is that for technologies involving large investment in manufacturing infrastructure, the US military is one very likely source of funds. But

Re: Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Ben Goertzel
Hi, The problem, Ben, is that your response amounts to "I don't see why that would work", but without any details. The problem, Richard, is that you did not give any details as to why you think your proposal will "work" (in the sense of delivering a system whose Friendliness can be very confid

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel
Hi, There is something about the gist of your response that seemed strange to me, but I think I have put my finger on it: I am proposing a general *class* of architectures for an AI-with-motivational-system. I am not saying that this is a specific instance (with all the details nailed down) of

[singularity] Fwd: "After Life" by Simon Funk

2006-10-29 Thread Ben Goertzel
FYI -- Forwarded message -- From: Eliezer S. Yudkowsky <[EMAIL PROTECTED]> Date: Oct 30, 2006 12:14 AM Subject: "After Life" by Simon Funk To: [EMAIL PROTECTED] http://interstice.com/~simon/AfterLife/index.html An online novella, with hardcopy purchaseable from Lulu. Theme: Upl

Re: Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel
Hi Richard, Let me go back to the start of this dialogue... Ben Goertzel wrote: Loosemore wrote: > The motivational system of some types of AI (the types you would > classify as tainted by complexity) can be made so reliable that the > likelihood of them becoming unfriendly would be s

Re: [singularity] Re: Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel
Hi, I feel a little sad, however, that you simultaneously bow out of the debate AND fire some closing shots, in the form of a new point (the issue of whether or not this is "proof") and some more complaints about the "vague statements" in my emails. I clearly cannot reply to these, because you

[singularity] DC Future Salon - Metaverse Roadmap - Weds Nov 8, 7-9 PM

2006-10-31 Thread Ben Goertzel
For anyone in the DC area, the following event may be interesting... Not directly AGI-relevant, but interesting in that one day virtual worlds like Second Life may be valuable for AGI in terms of giving them a place to play around and interact with humans, without need for advanced robotics...

[singularity] Goertzel meets Sirius

2006-10-31 Thread Ben Goertzel
Me, interviewed by R.U. Sirius, on AGI, the Singularity, philosophy of mind/emotion/immortality and so forth: http://mondoglobo.net/neofiles/?p=78 Audio only... -- Ben

[singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel
Hi, For anyone who is curious about the talk "Ten Years to the Singularity (if we Really Really Try)" that I gave at Transvision 2006 last summer, I have finally gotten around to putting the text of the speech online: http://www.goertzel.org/papers/tenyears.htm The video presentation has been o

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel
...incredibly challenging and extraordinary task, but this is the impression which comes across in the talk. Yours, Joshua 2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>: > > Hi, > > For anyone who is curious about the talk "Ten Years to the Singularity > (if we Really Really

Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel
...software is often not documented or easily digestible, but it seems like one of the most efficient ways to attack the software development problem. Bo On Mon, 11 Dec 2006, Ben Goertzel wrote: ) Hi Joshua, ) ) Thanks for the comments ) ) Indeed, the creation of a thinking machine is not a

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel
...the numbers raw or divided by the population size? -Chuck On 12/11/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Hi, > > For anyone who is curious about the talk "Ten Years to the Singularity > (if we Really Really Try)" that I gave at Transvision 2006 last > summ

Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel
Hi, You mention "intermediate steps to AI", but the question is whether these are narrow-AI applications (the bane of AGI projects) or some sort of (incomplete) AGI. According to the approach I have charted out (the only one I understand), the true path to AGI does not really involve commercially

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel
BTW Ben, for the love of God, can you please tell me when your AGI book is coming out? It's been in my Amazon shopping cart for 6 months now! The publisher finally mailed me a copy of the book last week! Ben

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-15 Thread Ben Goertzel
Well, the requirements to **design** an AGI on the high level are much steeper than the requirements to contribute (as part of a team) to the **implementation** (and working out of design details) of AGI. I dare say that anyone with a good knowledge of C++, Linux, and undergraduate computer scien

Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-20 Thread Ben Goertzel
Yes, this is one of the things we are working towards with Novamente. Unfortunately, meeting this "low barrier" based on a genuine AGI architecture is a lot more work than doing so in a more bogus way based on an architecture without growth potential... ben On 12/20/06, Joshua Fox <[EMAIL PROTEC

[singularity] Storytelling, empathy and AI

2006-12-20 Thread Ben Goertzel
This post is a brief comment on PJ Manney's interesting essay, http://www.pj-manney.com/empathy.html Her point (among others) is that, in humans, storytelling is closely tied with empathy, and is a way of building empathic feelings and relationships. Mirror neurons and other related mechanisms

Re: [singularity] Vinge & Goerzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Ben Goertzel
Joshua Fox wrote: Any comments on this: http://news.com.com/2100-11395_3-6160372.html Google has been mentioned in the context of AGI, simply because they have money, parallel processing power, excellent people, an orientation towards technological innovation, and important narrow AI success

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Ben Goertzel
Matt Mahoney wrote: --- Jef Allbright <[EMAIL PROTECTED]> wrote: On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: What I argue is this: the fact that Occam's Razor holds suggests that the universe is a computation. Matt - Would you please clarify how/why you think B follows
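For context, the formalization presumably standing behind Matt's claim is Solomonoff's universal prior, the usual mathematical rendering of Occam's Razor:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal Turing machine and the sum ranges over programs p whose output begins with x, so shorter (simpler) hypotheses dominate. Note that "simplicity" is defined only relative to the machine U, which is one reason Jef's question has bite: the formalism presupposes computation rather than deriving it from the success of Occam's Razor.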

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel
Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: What I wanted was a set of non-circular definitions of such terms as "intelligence" and "learning", so that you could somehow *demonstrate* that your mathematical idealization of these terms corresp

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel
Richard, I long ago proposed a working definition of intelligence as "Achieving complex goals in complex environments." I then went through a bunch of trouble to precisely define all the component terms of that definition; you can consult the Appendix to my 2006 book "The Hidden Pattern"..
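A closely related formalization, for readers who want the mathematics (this is Legg and Hutter's universal intelligence measure, a cousin of the Hidden Pattern definition rather than the definition itself):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, V_\mu^\pi is the expected cumulative reward of agent \pi in environment \mu, and K is Kolmogorov complexity; "achieving complex goals in complex environments" becomes a complexity-weighted average of goal-achievement over all computable environments.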

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Ben Goertzel
Hi Shane, ...did become possible, won't the Block argument then become a serious problem? If you did have infinite computation then you could just build an AIXI and be done. There would be no point in building a different system that was provably less powerful and yet more complex to construct.

[singularity] Uselessness of AIXI

2007-03-06 Thread Ben Goertzel
This would be the paper, everyone: http://www.vetta.org/documents/IDSIA-12-06-1.pdf Shane - first you smack down the Goedel machine, and now AIXI! Is it genuinely useless in practice, do you think? Hutter says one of his current research priorities is to shrink it down into something that c

Re: [singularity] Uselessness of AIXI

2007-03-06 Thread Ben Goertzel
Matt Mahoney wrote: --- Ben Goertzel <[EMAIL PROTECTED]> wrote: What AIXI does is to continually search through the space of all possible programs, to find the one that in hindsight (based on probabilistic inference with an Occam prior) would have best helped it achieve its goals
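For reference, the action selection being paraphrased is, in roughly Hutter's notation (reproduced from memory, so check the original before quoting):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_{1..m}) = o_1 r_1 .. o_m r_m} 2^{-|q|}

AIXI weights every program q consistent with the interaction history by 2^{-|q|} (the Occam prior) and picks the action maximizing expected reward out to horizon m; the "search through the space of all possible programs" is the inner sum, and it is exactly what makes AIXI incomputable.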

Re: [singularity] Uselessness of AIXI

2007-03-07 Thread Ben Goertzel
We will only know for sure whether AIXI theory was useful or not when we can look back 1000 years from now. Shane And of course, if we succeed in creating superhuman AGIs at time T, 1000 human-years of scientific advance will likely occur within a rather brief time-period after time T ;-)

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-07 Thread Ben Goertzel
Richard Loosemore wrote: Eugen Leitl wrote: On Wed, Mar 07, 2007 at 01:24:05PM -0500, Richard Loosemore wrote: For each literary work n in N, use G to generate a universe u, and within that universe, inject a copy of the literary work at a random point in the spacetime of u. Measure the reacti

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel
Sorry Shane, I guess I got carried away with my sense of humor ... No, I don't really think AIXI is useless in a mathematical, theoretical sense. I do think it's a dead-end in terms of providing guidance to pragmatic AGI design, but that's another story I will send a clarifying email to the

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel
Oops, guess that email WAS sent to the list, though I didn't realize it. But no harm done! Ben Goertzel wrote: Sorry Shane, I guess I got carried away with my sense of humor ... No, I don't really think AIXI is useless in a mathematical, theoretical sense. I do think it'

[singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel
...I don't really think AIXI is useless in a mathematical, theoretical sense. I do think it's a dead-end in terms of providing guidance to pragmatic AGI design, but that's another story. I will send a clarifying email to the list; I certainly had no serious intention to offend people... Ben Ben Goertzel wrote

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel
Shane Legg wrote: :-) No offence taken, I was just curious to know what your position was. I can certainly understand people with a practical interest not having time for things like AIXI. Indeed as I've said before, my PhD is in AIXI and related stuff, and yet my own AGI project is based on o

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel
Shane Legg wrote: On 3/8/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote: using AIXI-type ideas. The problem is that there is nothing, conceptually, in the whole army of ideas surrounding AIXI, that tells you about how to deal with the c

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel
The point I just made cannot be pursued very far, however, because any further discussion of it *requires* that someone on the AIXI side become more specific about why they believe their definition of "intelligent behavior" should be considered coextensive with the common sense use of that

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel
Sorry, but I simply do not accept that you can make "do really well on a long series of IQ tests" into a computable function without getting tangled up in an implicit homuncular trap (i.e. accidentally assuming some "real" intelligence in the computable function). Let me put it this way:

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel
...software program scoring 100% on human-created IQ tests. So, the Occam prior embodied in AIXI would almost surely not cause it to take the strategy you suggest. -- Ben Richard Loosemore wrote: Ben Goertzel wrote: Sorry, but I simply do not accept that you can make "do really well on a

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel
Alas, that was not quite the question at issue... In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* to go so far as to simulate most of the functionality of a human brain in order to acquire its ability? I am not asking you to make a judgment call on whether or not it w

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel
AIXI is valueless. Well, I agree that AIXI provides zero useful practical guidance to those of us working on practical AGI systems. However, as I clarified in a prior longer post, saying that mathematics is valueless is always a risky proposition. Statements of this nature have been prov

[singularity] The Shtinkularity

2007-03-11 Thread Ben Goertzel
If you have 2.5 minutes or so to spare, my 13-year-old son Zebulon has made another Singularity-focused mini-movie: http://www.zebradillo.com/AnimPages/The%20Shtinkularity.html This one is not as deep as RoboTurtle II, his 14-minute Singularity-meets-Elvis epic from a year ago or so ... but,

Re: [singularity] The Shtinkularity

2007-03-11 Thread Ben Goertzel
I like Dicksley Chainsworth, too. It's always important for your heroes to have a worthy adversary. PJ What struck me about that character was the uncanny resemblance between Dick Cheney (whose head, obviously, underlies Dicksley Chainsworth) and Steve Martin ... see the resemblance?

Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Ben Goertzel
Why has the singularity and AGI not triggered such an interest? Thiel's donations to SIAI seem like the exception which highlights the rule. Salesmanship? Believability? Fear of Consequences including backlash? I would suspect it is the right people not being approached in the right w

Re: [singularity] The establishment line on AGI

2007-03-19 Thread Ben Goertzel
I don't like to insult US academia too severely, because I feel it's been one of the most productive intellectual establishments in the history of the human race. However, in my 8 years as a professor I did find it frequently frustrating, and one of the many reasons was the narrow-mindedness of

Re: [singularity] The establishment line on AGI

2007-03-19 Thread Ben Goertzel
Shane Legg wrote: On 3/19/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote: conservative incremental steps, the current scientific community is highly culturally biased against anyone who wants to make a large leap. Science has drifted int

[singularity] Candide and the Singularity

2007-03-26 Thread Ben Goertzel
My son Zeb read Candide by Voltaire, and was taken by the idea that this is the best of all possible worlds. He has applied this to AGI and the Singularity, in the following passage from a SF story he wrote last week: " Out of the factory, designed with the sole purpose of generating such

[singularity] Japanese gods pray for a positive Singularity

2008-01-19 Thread Ben Goertzel
A frivolous blog post some may find amusing ;-) http://www.goertzel.org/blog/blog.htm ben -- Ben Goertzel, PhD, CEO, Novamente LLC and Biomind LLC, Director of Research, SIAI [EMAIL PROTECTED] "We are on the edge of change comparable to the rise of human life on Earth." -- Vernor Vinge

Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
-- Ben Goertzel, PhD, CEO, Novamente LLC and Biomind LLC, Director of Research, SIAI "We are on the edge of change comparable to the rise of human life on Earth." -- Vernor Vinge

Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
-- Ben Goertzel, PhD, CEO, Novamente LLC and Biomind LLC, Director of Research, SIAI

Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
On Jan 20, 2008 1:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Hi Natasha > > After discussions with you and others in 2005, I created a revised > version of the essay, > which may not address all your complaints, but hopefully addressed some of > them. >

Re: [singularity] The Extropian Creed by Ben

2008-01-21 Thread Ben Goertzel
...concern expressed in each essay was/is a desire to see transhumanism work to help solve the many hardships of humanity – everywhere. Thank you Ben. Best wishes, Natasha -- Natasha Vita-More, PhD Candidate, Planetary Collegium - CAiiA, situated in

[singularity] Multi-Multi-....-Multiverse

2008-01-25 Thread Ben Goertzel
...was really refreshing!!!) ben -- Ben Goertzel, PhD, CEO, Novamente LLC and Biomind LLC, Director of Research, SIAI "If men cease to believe that they will one day become gods then they will surely become worms." -- Henry Miller

Re: [singularity] Wrong focus?

2008-01-26 Thread Ben Goertzel
Hi, > Why does discussion never (unless I've missed something - in which case > apologies) focus on the more realistic future "threats"/possibilities - > future artificial species as opposed to future computer simulations? While I don't agree that AGI is less realistic than artificial biological

Re: [singularity] Wrong focus?

2008-01-26 Thread Ben Goertzel
Mike, > I certainly would like to see discussion of how species generally may be > artificially altered, (including how brains and therefore intelligence may > be altered) - and I'm disappointed, more particularly, that Natasha and any > other transhumanists haven't put forward some half-way reaso
