Re: [singularity] Defining the Singularity

2006-10-24 Thread Richard Loosemore
discussion as possible. Let's hope that happens on the occasions when it is discussed, now and in the future. Richard Loosemore.

Re: [singularity] Defining the Singularity

2006-10-25 Thread Richard Loosemore
in a 'rational/normative' AI system). My new policy is to discuss issues only with people who can resist the temptation to behave like this. For that reason, Michael, you're now killfiled. If anyone else wants to discuss the issues, feel free. Richard Loosemore.

Re: [singularity] Defining the Singularity

2006-10-26 Thread Richard Loosemore
to actually use the stored information, which is presumably what a novice AI programmer would do. Richard Loosemore

Re: [singularity] Defining the Singularity

2006-10-27 Thread Richard Loosemore
a classic example: every single debate or discussion of the consequences of the singularity, it seems, is totally dominated by this kind of sloppy thinking. Richard Loosemore Matt Mahoney wrote: I have raised the possibility that a SAI (including a provably friendly one, if that's

Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
this is a milestone of mutual accord in a hitherto divided community. Progress! Richard Loosemore.

Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
Mitchell Porter wrote: Richard Loosemore: In fact, if it knew all about its own design (and it would, eventually), it would check to see just how possible it might be for it to accidentally convince itself to disobey its prime directive. But it doesn't have a prime directive, does

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
that it is just vague handwaving without specific questions designed to show that the argument falls apart under probing. I don't see the argument falling apart, so making that accusation again would be unjustified. Richard Loosemore Ben Goertzel wrote: Hi, There is something about the gist of your

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Richard Loosemore
solving. The fact that this works in practice strongly suggests that the universe is indeed a simulation. It suggests nothing of the sort. Hutter's theory is a mathematical fantasy with no relationship to the real world. Richard Loosemore.

Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Richard Loosemore
Razor etc.) is irrelevant if you or Hutter cannot prove something more than a hand-waving connection between the mathematical idealizations of intelligence, learning, etc., and the original meanings of those words. So my original request stands unanswered. Richard Loosemore. P.S. The above
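
For reference, the AIXI construction at issue (Hutter's notation, reproduced here from memory rather than from the thread) selects actions by an expectimax expression over all programs q for a universal Turing machine U, weighted by the algorithmic Occam prior:

$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [r_k + \cdots + r_m] \sum_{q\,:\,U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$

where the $a_i$, $o_i$, $r_i$ are actions, observations and rewards, and $\ell(q)$ is the length of program $q$. The dispute above is precisely whether this idealization corresponds to "intelligence" and "learning" in their ordinary senses.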

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical idealization of these terms corresponds with the real thing, ... so

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore
Ben Goertzel wrote: Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical idealization

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore
to perform as well as I do, because it redefines what I am trying to do in such a way as to weaken my performance, and then proves that it can perform better than *that*). Richard Loosemore

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore
one on call and ready to go when needed). I only need the possibility that it will do this, and my conclusion holds. So: clear question. Does the proof implicitly allow it? Richard Loosemore.

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore
them to deliver one. Such a proof is completely valueless. AIXI is valueless. QED. Richard Loosemore.

Entropy of the universe [WAS Re: [singularity] Implications of an already existing singularity.]

2007-03-28 Thread Richard Loosemore
visible from *here*. What about the stuff (possibly infinite amounts of stuff) that lies beyond the curvature horizon? Richard Loosemore

[singularity] Definition of 'Singularity' and 'Mind'

2007-04-17 Thread Richard Loosemore
no sense to ask whether there would be minds so advanced that 'we' could never understand them. Or, to be precise, it is not at all obvious that such a situation will ever exist. Richard Loosemore.

Re: [singularity] Re: [tt] [agi] Definition of 'Singularity' and 'Mind'

2007-04-18 Thread Richard Loosemore
The possibility has occurred to me. :-) Colin Tate-Majcher wrote: Heheh, how do you know you didn't want to know what it was like to live in the 2000s and work toward the Singularity. Maybe we are already super advanced and just got bored :) -Colin

Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Richard Loosemore
The full argument is much more detailed, of course, but that is the core of it. Oh, and: Shane is *not* the one who proved the correctness of my assertion! I am not sure where you got that from. ;-) Richard Loosemore.

Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-15 Thread Richard Loosemore
on a machine with only one thousandth of today's power. And besides, solving the problem of understanding sentences could easily be done in principle with even a vocabulary as small as 200 words. Richard Loosemore.

Re: [singularity] Friendly question...

2007-05-27 Thread Richard Loosemore
would not actually work. With a motivational system as bad as that, it would never get to be an AGI in the first place. Hence your assertion that humanity will be wiped out by accident is completely untenable. Richard Loosemore

Re: [singularity] The humans are dead...

2007-05-29 Thread Richard Loosemore
Keith Elis wrote: Richard Loosemore wrote: Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered scary. I'm on your side, too, Richard. I understand this, and I

Re: [singularity] Benefits of being a kook

2007-09-22 Thread Richard Loosemore
mongers in Hollywood would *love* that SIAI-based group to get more publicity, because they'd make money hand over fist if that happened. Richard Loosemore

Re: [singularity] Benefits of being a kook

2007-09-24 Thread Richard Loosemore
to state your opinion and walk away? Discussion involves the technical details. Anything less is meaningless. Richard Loosemore

Re: [singularity] QUESTION

2007-10-22 Thread Richard Loosemore
refers to what would happen if such machines were built: they would produce a flood of new discoveries on such an immense scale that we would jump from our present technology to the technology of the far future in a matter of a few years. Hope that clarifies the situation. Richard Loosemore

Re: [singularity] QUESTION

2007-10-22 Thread Richard Loosemore
albert medina wrote: Dear Sirs, I have a question to ask and I am not sure that I am sending

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
proofs.) Hope that helps, but please ask questions if it does not. Richard Loosemore.

Re: [singularity] CONSCIOUSNESS

2007-10-23 Thread Richard Loosemore
places. I propose to you that Consciousness (encased within the brain) does not know Itself, hence the lively quest and fascination for other intelligence, such as AGI. Sincerely, Albert

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: This is nonsense: the result of giving way to science fiction fantasies instead of thinking through the ACTUAL course of events. If the first one is benign, the scenario below will be impossible, and if the first one

Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Richard Loosemore
the consequences might be more than you were expecting them to be. This is my vision of what a Bright Green Tomorrow could be like. Let me know if you have questions. Richard Loosemore.

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
with a "Yeah, but what if everything goes wrong, huh? What if Frankenstein turns up? Huh? Huh?" comment. Happens every time. Richard Loosemore Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: [snip post-singularity utopia] Let's assume for the moment that the very first AI

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
on in this discussion. Richard Loosemore Mike Tintner wrote: Every speculation on this board about the nature of future AGI's has been pure fantasy. Even those which try to dress themselves up in some semblance of scientific reasoning. All this speculation, for example, about the friendliness

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
candice schuster wrote: Hi Richard, Without getting too technical on you... how do you propose implementing these ideas of yours? In what sense? The point is that implementation would be done by the AGIs, after we produce a blueprint for what we want. Richard Loosemore

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
you, THAT would be fantasy. Richard Loosemore

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
are fun. Richard Loosemore

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
on Nanotechnology by Eric Drexler, or the huge literature on space elevators, or the stuff on life extension. Not fantasy, really. Richard Loosemore

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Richard Loosemore
status: I am sure some people will choose not to take that option, and just stay as they are). Richard Loosemore

Re: [singularity] John Searle...

2007-10-25 Thread Richard Loosemore
Very concisely put: that is exactly the situation. Richard Loosemore

Re: [singularity] John Searle...

2007-10-25 Thread Richard Loosemore
Richard Loosemore

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-26 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious things that would happen in this future is that humans will be able to *choose* what intelligence level to be experiencing, on a day

Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore
species. Not master and servant. Just one species with more options than before. [I can see I am going to have to write this out in more detail, just to avoid the confusion caused by brief glimpses of the larger picture]. Richard Loosemore

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
is: its initial feelings of friendliness toward humanity would have to be the motivation that drove it to find out the CEV. The goal state of its motivation system is assumed in the initial state of its motivation system. Hence: circular. Richard Loosemore

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Stefan Pernar wrote: On 10/26/07, Richard Loosemore [EMAIL PROTECTED] wrote: Stefan can correct me if I am wrong here, but I think that both yourself and Aleksei have misunderstood the sense in which he is pointing to a circularity. If you build

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-26 Thread Richard Loosemore
up to their discoveries. Richard Loosemore

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
these last points, we agree. Richard Loosemore

Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: candice schuster wrote: Richard, Your responses to me seem to go in roundabouts. No insult intended, however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible? I think I recently (last

Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Richard Loosemore
at different levels here, and using these terms in ways that cross over rather weirdly. I speak only of two different types of mechanism, but that does not quite map onto your usage. I will have to think about this some more. Richard Loosemore

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Suppose that the collective memories of all the humans make up only one billionth of your total memory, like one second of memory out of your human lifetime. Would it make much difference if it was erased

Re: [singularity] MindForth achieves True AI functionality

2008-02-02 Thread Richard Loosemore
pronouncements about Mentifex may be sincere, but his estimates of its capabilities are somewhat ... exaggerated. Richard Loosemore

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Richard Loosemore
of them are fools, and therefore NONE of their counter-arguments are valid. Really. I like Jaron Lanier as a musician, but this is drivel. Richard Loosemore

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Stathis Papaioannou wrote: On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: The first problem arises from Lanier's trick of claiming that there is a computer, in the universe of all possible computers, that has a machine architecture and a machine state that is isomorphic to BOTH

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: When people like Lanier allow themselves the luxury of positing infinitely large computers (who else do we know who does this? Ah, yes, the AIXI folks), they can make infinitely unlikely coincidences happen. It is a commonly

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore
Stathis Papaioannou wrote: On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: [snip] But again, none of this touches upon Lanier's attempt to draw a bogus conclusion from his thought experiment. No external observer would ever be able to keep track of such a fragmented computation

Re: [singularity] Definitions

2008-02-18 Thread Richard Loosemore
for what consciousness is, which starts out from a resolution of the definition-difficulty. I note that Nick Humphrey has recently started to say something very similar. Richard Loosemore

Re: [singularity] Definitions

2008-02-19 Thread Richard Loosemore
is Getting Zapped. Richard Loosemore

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore
Stathis Papaioannou wrote: On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: Sorry, but I do not think your conclusion even remotely follows from the premises. But beyond that, the basic reason that this line of argument is nonsensical is that Lanier's thought experiment was rigged

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote: I am aware of some of those other sources for the idea: nevertheless, they are all nonsense for the same reason. I especially single out Searle: his writings on this subject are virtually worthless. I have

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
the other must also be understanding. (Searle's main folly, of course, is that he has never shown any sign of being able to understand this point). Richard Loosemore

Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore
floating point numbers because the behavior of the net deteriorated badly if the numerical precision was reduced. This was especially important on long training runs or large datasets. Richard Loosemore
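
A minimal NumPy sketch of the precision effect described above (an illustration under assumed magnitudes, not the original experiment): when many small weight updates are accumulated at low precision, the updates are eventually rounded away entirely, which is one way a long training run can deteriorate.

    import numpy as np

    # Accumulate 100,000 updates of 1e-4; the exact sum is 10.0.
    update = 1e-4
    w64 = np.float64(0.0)
    w16 = np.float16(0.0)
    for _ in range(100_000):
        w64 = w64 + np.float64(update)
        w16 = w16 + np.float16(update)  # lost once the spacing at w16 exceeds 2e-4

    print(w64)  # ~10.0
    print(w16)  # stalls near 0.25: later updates are silently rounded away

The stall happens because the gap between adjacent float16 values around 0.25 (about 2.4e-4) already exceeds twice the update size, so round-to-nearest leaves the accumulator unchanged.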

Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- John G. Rose [EMAIL PROTECTED] wrote: Is there really a bit per synapse? Is representing a synapse with a bit an accurate enough simulation? One synapse is a very complicated system. A typical
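
For scale, the back-of-envelope arithmetic behind the bit-per-synapse question, using common order-of-magnitude figures (assumptions for illustration, not numbers from the thread):

    # One bit per synapse, at ~10^11 neurons x ~10^4 synapses per neuron.
    neurons = 1e11
    synapses_per_neuron = 1e4
    bits = neurons * synapses_per_neuron  # ~10^15 bits total
    print(f"{bits:.0e} bits ~= {bits / 8 / 1e12:.0f} TB")  # 1e+15 bits ~= 125 TB

Whether one bit is an accurate enough model of a synapse is exactly what the exchange above disputes.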

Re: [singularity] Vista/AGI

2008-03-16 Thread Richard Loosemore
[EMAIL PROTECTED] wrote: You have to be careful with the phrase 'Manhattan-style project'. You are right. On previous occasions when this subject has come up, I, at least, have referred to the idea as an Apollo Project, not a Manhattan Project. Richard Loosemore That was a military

Re: [singularity] future search

2008-04-02 Thread Richard Loosemore
... :-) Richard Loosemore

Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore
the field in Dead Stop mode. Richard Loosemore

Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore
J. Andrew Rogers wrote: On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote: What could be compelling about a project? (Novamente or any other). Artificial Intelligence is not a field that rests on a firm theoretical basis, because there is no science that says this design should produce

Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore
J. Andrew Rogers wrote: On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote: Artificial Intelligence research does not have a credible science behind it. There is no clear definition of what intelligence is, there is only the living example of the human mind that tells us that some things

Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore
true path to AGI ... I strongly suspect there are many... Actually, the discussion had nothing to do with the rather bizarre interpretation you put on it above. Richard Loosemore

Re: [singularity] Vista/AGI

2008-04-07 Thread Richard Loosemore
so you cannot demand that the person produce evidence to support the nonexistence claim. The onus is entirely on you to provide evidence that there is a science behind AI, if you believe that there is, not on me to demonstrate that there is none. Richard Loosemore

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
because the conflict resolution issues are all complexity-governed. I am astonished that you would so blatantly call it something that it is not. Richard Loosemore

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it. Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
that Google will somehow reach a threshold and (magically) become intelligent. Why would that happen? If they deliberately set out to build an AGI somewhere, and then hook that up to Google, that is a different matter entirely. But that is not what is being suggested here. Richard Loosemore

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: Richard Loosemore: I am not sure I understand. There is every reason to think that a currently-envisionable AGI would be millions of times smarter than all of humanity put together. Simply build a human-level AGI, then get it to bootstrap to a level of, say

[singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to? Either will do: your suggestion achieves neither. If I ask your non-AGI the following

Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: When a computer processes a request like "how many teaspoons in a cubic parsec?" it can extract the meaning of the question by a relatively simple set of syntactic rules and question templates. But when you ask it a question
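
For concreteness, once a question template has extracted the two units, the conversion itself is a one-line calculation (standard constants; a sketch for illustration, not code from the thread):

    # Teaspoons in a cubic parsec, from standard unit definitions.
    PARSEC_M = 3.0857e16           # metres per parsec
    TEASPOON_M3 = 4.92892159e-6    # cubic metres per US teaspoon
    print(f"{PARSEC_M ** 3 / TEASPOON_M3:.2e}")  # ~5.96e+54 teaspoons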

Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: If you have a better plan for AGI, please let me know. I do. I did already. You are welcome to ask questions about it at any time (see http://susaro.com/publications). Question: which of these papers

About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-10 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: I did also look at http://susaro.com/archives/category/general but there is no design here either, just a list of unfounded assertions. Perhaps you can explain why you believe point #6 in particular

Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore
but your posts are sounding more and more like incoherent rants. Richard Loosemore

Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore
have already read has not been published! Are there no depths to which you will not stoop? Richard Loosemore

Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore
get an advanced feeling that such a work is on the way are the people on the front lines; you see all the pieces coming together just before they are assembled for public consumption. Whether or not someone could write down tests of progress ahead of that point, I do not know. Richard Loosemore

Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore
club for people dedicated to spineless Yudkowsky-worship. Richard Loosemore

Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore
Thomas McCabe wrote: On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote: Such a discussion list would be just another exclusive club for people dedicated to spineless Yudkowsky-worship. Richard Loosemore Eli's not a member of fai-logistics, and I don't think he even knows about it yet

Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore
Thomas McCabe wrote: On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote: Thomas McCabe wrote: On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote: Such a discussion list would be just another exclusive club for people dedicated to spineless Yudkowsky-worship. Richard Loosemore

Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore
Thomas McCabe wrote: On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote: You repeatedly insinuate, in your comments above, that the idea is not taken seriously by anyone, in spite of the fact I have already made it quite clear that this is false. The burden of proof is on you to show

Re: [singularity] New list announcement: fai-logistics

2008-04-26 Thread Richard Loosemore
be necessary to PRESUPPOSE the answer to the question that is driving these considerations about scientific theories. Richard Loosemore Thomas McCabe wrote: On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins [EMAIL PROTECTED] wrote: Thomas McCabe wrote: Does NASA have a coherent

Re: [singularity] Quantum Mechanics and Consciousness

2008-05-21 Thread Richard Loosemore
is, and what its explanation is (you will have to wait for my book to come out before you see why I would be so confident), so if you are anxious that a future AI should have consciousness, I believe this can easily be arranged. Richard Loosemore Bertromavich Edenburg wrote: For Virtual AI

[singularity] Nine Misunderstandings About AI

2008-04-08 Thread Richard Loosemore
I have just written a new blog post that is the beginning of a daily series this week and next, when I will be launching a few broadsides against the orthodoxy and explaining where I am going with my work. http://susaro.com/ Richard Loosemore

[singularity] Blog essay on the complex systems problem

2008-04-11 Thread Richard Loosemore
of the ideas I have written about elsewhere. Richard Loosemore

[singularity] A more accessible summary of the CSP

2008-04-13 Thread Richard Loosemore
will not be as demanding as the last (a few hundred words instead of 4,200). Richard Loosemore

[singularity] An Open Letter to AGI Investors

2008-04-16 Thread Richard Loosemore
I have stuck my neck out and written an Open Letter to AGI (Artificial General Intelligence) Investors on my website at http://susaro.com. All part of a campaign to get this field jumpstarted. Next week I am going to put up a road map for my own development project. Richard Loosemore