Re: [agi] Moore's law data - defining HEC

2003-01-06 Thread Eliezer S. Yudkowsky
://www.google.com/search?q=+site:sl4.org+crossover+bandwidth

Re: [agi] Friendliness toward humans

2003-01-09 Thread Eliezer S. Yudkowsky
that anyone who hasn't gotten far enough theoretically to realize this also won't get very far on AGI implementation.

Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: There may be additional rationalization mechanisms I haven't identified yet which are needed to explain anosognosia and similar disorders. Mechanism (4) is the only one deep enough to explain why, for example, the left hemisphere automatically and unconsciously

Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
position?

Re: [agi] C-T Thesis (or a version thereof) - Is it useable as an in-principle argument for strong AI?

2003-01-15 Thread Eliezer S. Yudkowsky
. This is not a knockdown argument but it is a strong one; only Penrose and Hameroff have had the courage to face it down openly - postulate, and search for, both the new physics and the new neurology required.

Re: [agi] Cosmodelia's posts: Music and Artificial Intelligence; Jane;

2003-02-02 Thread Eliezer S. Yudkowsky
.) Spirit isn't emergent, and isn't everywhere, and isn't a figment of the imagination, and isn't supernatural. Spirit refers to a real thing, with a real explanation; it's just that the explanation is very, very difficult.

Re: [agi] AGI morality

2003-02-10 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: However, it's to be expected that an AGI's ethics will be different than any human's ethics, even if closely related. What do a Goertzelian AGI's ethics and a human's ethics have in common that makes it a humanly ethical act to construct a Goertzelian AGI?

Re: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Eliezer S. Yudkowsky
substantially more thorough definitions in Creating Friendly AI.)

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: I recently read through Marcus Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a formal definition of intelligence, it is not a solution of Friendliness (nor do I have any reason to believe Marcus Hutter intended it as one). In fact, as one

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
. Actually, Ben, AIXI and AIXI-tl are both formal systems; there is no internal component in that formal system corresponding to a goal definition, only an algorithm that humans use to determine when and how hard they will press the reward button.

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
the behaviors you want... do you think it does?

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
behaviors, but I think Hutter's already aware that this is probably AIXI's weakest link.

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
, is that Hutter's systems are purely concerned with goal-satisfaction, whereas Novamente is not entirely driven by goal-satisfaction. Is this reflected in a useful or important behavior of Novamente, in its intelligence or the way it interacts with humans, that is not possible to AIXI?

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: Not really. There is certainly a significant similarity between Hutter's stuff and the foundations of Novamente, but there are significant differences too. To sort out the exact relationship would take me more than a few minutes' thought. There are indeed major

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
-bounded uploaded human or an AIXI-tl, supplies the uploaded human with a greater reward as the result of strategically superior actions taken by the uploaded human. :)

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
process P which, given either a tl-bounded uploaded human or an AIXI-tl, supplies the uploaded human with a greater reward as the result of strategically superior actions taken by the uploaded human. :) -- Eliezer S. Yudkowsky Hmmm. Are you saying that given a specific reward function

Re: [agi] unFriendly AIXI

2003-02-11 Thread Eliezer S. Yudkowsky
) AIXI has nothing to say to you about the pragmatic problem of designing Novamente, nor are its theorems relevant in building Novamente, etc. But that's exactly the question I'm asking you. *Do* you believe that Novamente and AIXI rest on the same foundations?

[agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky

[agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
difference between AIXI and a Friendly AI.

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Eliezer S. Yudkowsky
posthuman rights that we understand as little as a dog understands the right to vote. -- James Hughes

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
you're too tired to think about can't hurt you.

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
challenge C on which a tl-bounded human upload outperforms AIXI-tl?

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
take a proposal whose rational extrapolation is to Friendliness and which seems to lie at a local optimum relative to the improvements I can imagine; proof is impossible.

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
or my own requires mentally reproducing more of the abstract properties of AIXI-tl, given its abstract specification, than your intuitions currently seem to be providing. Do you have a non-intuitive mental simulation mode?

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: It *could* do this but it *doesn't* do this. Its control process is such that it follows an iterative trajectory through chaos which is forbidden to arrive at a truthful solution, though it may converge to a stable attractor

Re: [agi] unFriendly Hibbard SIs

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: Hey Eliezer, my name is Hibbard, not Hubbard. *Argh* sound of hand whapping forehead sorry. On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: *takes deep breath* This is probably the third time you've sent a message to me over the past few months where you make some

Re: [agi] who is this Bill Hubbard I keep reading about?

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: Strange that there would be someone on this list with a name so similar to mine. I apologize, dammit! I whack myself over the head with a ballpeen hammer! Now let me ask you this: Do you want to trade names?

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
a top-level reflective choice that wasn't there before, that (c) was abstracted over an infinite recursion in your top-level predictive process. But if this isn't immediately obvious to you, it doesn't seem like a top priority to try and discuss it...

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: But if this isn't immediately obvious to you, it doesn't seem like a top priority to try and discuss it... Argh. That came out really, really wrong and I apologize for how it sounded. I'm not very good at agreeing to disagree. Must... sleep...

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
it, but a straight line like that only comes along once.)

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
complex PD.

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
to cooperate with it, on the *one* shot PD? AIXI can't take the action it needs to learn the utility of...

Re: [agi] doubling time revisted.

2003-02-17 Thread Eliezer S. Yudkowsky
Faster computers make AI easier. They do not make Friendly AI easier in the least. Once there's enough computing power around that someone could create AI if they knew exactly what they were doing, Moore's Law is no longer your friend.

[agi] Low-hanging fruits for true AGIs

2003-02-18 Thread Eliezer S. Yudkowsky
some AGI-recognizable, previously unrecognized, usefully predictable and reliably exploitable empirical regularities? Or does any attempt to generate money via AGI require launching at least a small specialized company to do so?

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: But of course, none of us *really know*. Technically, I believe you mean that you *think* none of us really know, but you don't *know* that none of us really know. To *know* that none of us really know, you would have to really know.

Re: [agi] doubling time watcher.

2003-02-18 Thread Eliezer S. Yudkowsky
know for certain, but at the moment, the possibility of guesstimating within even an order of magnitude seems premature. See also Human-level software crossover date from the human crossover metathread on SL4: http://sl4.org/archive/0104/1057.html

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
Wei Dai wrote: Eliezer S. Yudkowsky wrote: Important, because I strongly suspect Hofstadterian superrationality is a *lot* more ubiquitous among transhumans than among us... It's my understanding that Hofstadterian superrationality is not generally accepted within the game theory research

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
convergence to decision processes that are correlated with each other with respect to the oneshot PD. If you have sufficient evidence that the other entity is a superintelligence, that alone may be sufficient correlation.
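
A minimal sketch of the correlation point (my illustration; the payoff values are assumed, not from the original post): if two players' decision processes are known to be perfectly correlated, only the diagonal outcomes of the one-shot Prisoner's Dilemma are reachable and mutual cooperation beats mutual defection, whereas the uncorrelated analysis recommends defection.

# One-shot Prisoner's Dilemma payoffs to the row player (illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Uncorrelated analysis: defection strictly dominates (5 > 3 and 1 > 0).
best_reply = {their: max("CD", key=lambda m: PAYOFF[(m, their)]) for their in "CD"}

# Perfectly correlated decision processes: both players provably make the same
# choice, so only (C, C) and (D, D) are reachable, and cooperation wins (3 > 1).
correlated_choice = max("CD", key=lambda m: PAYOFF[(m, m)])

print(best_reply)         # {'C': 'D', 'D': 'D'}
print(correlated_choice)  # C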

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
) these miracles are unstable when subjected to further examination c2) the AI still provides no benefit to humanity even given the miracle When a branch of an AI extrapolation ends in such a scenario it may legitimately be labeled a complete failure.

Re: [agi] Hard Wired Switch

2003-03-03 Thread Eliezer S. Yudkowsky
). As far as I'm concerned, physically implemented morality is physically implemented morality whether it's a human, an AI, an AI society, or a human society.

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Eliezer S. Yudkowsky
to construct AI societies. What we regard as beneficial social properties are very contingent on our evolved individual designs.

[agi] Singletons and multiplicities

2003-03-03 Thread Eliezer S. Yudkowsky
the following down sides. At this point a balanced risk/benefit assessment can be made (not definitive of course since we haven't seen super-intelligent AGIs operation yet). But at least we've got some relevant issues on the table to think about.

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Eliezer S. Yudkowsky
and into outside-context failures of imagination.

Re: [agi] Web Consciousness and self consciousness

2003-09-09 Thread Eliezer S. Yudkowsky
that you do not yet know how to describe in purely physical terms, will fail to work. That's part of what makes AI hard.

Re: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Eliezer S. Yudkowsky
that the result violates the Susskind holographic bound for an object that can be contained in a 1-meter sphere - no more than 10^70 bits of information.)
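
For reference, a rough order-of-magnitude check of the figure quoted in this snippet (my arithmetic, assuming the standard holographic bound of A / (4 l_P^2) nats for a bounding surface of area A, and a sphere of radius about one meter):

\[
N_{\mathrm{bits}} \;\lesssim\; \frac{A}{4\,\ell_P^{2}\,\ln 2}
\;=\; \frac{4\pi R^{2}}{4\,\ell_P^{2}\,\ln 2}
\;\approx\; \frac{4\pi\,(1\ \mathrm{m})^{2}}{4\,(1.6\times 10^{-35}\ \mathrm{m})^{2}\times 0.69}
\;\approx\; 2\times 10^{70}\ \mathrm{bits},
\]

i.e. on the order of 10^70 bits, consistent with the bound cited above.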

Re: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Eliezer S. Yudkowsky
The Tao is the set of truths that can be stored in zero bits.

[agi] HUMOR: Friendly AI Critical Failure Table

2003-10-02 Thread Eliezer S. Yudkowsky
. 30: Roll twice again on this table, disregarding this result.

Re: [agi] HUMOR: Friendly AI Critical Failure Table

2003-10-03 Thread Eliezer S. Yudkowsky

Re: [agi] Bayes rule in the brain

2004-02-01 Thread Eliezer S. Yudkowsky
that are not justified. Yes, I agree with you there. An example?

Re: [agi] Within-cell computation in biological neural systems??

2004-02-07 Thread Eliezer S. Yudkowsky
what you're thinking of. You could easily end up having to go down to the molecular level.

Re: [agi] Within-cell computation in biological neural systems??

2004-02-08 Thread Eliezer S. Yudkowsky
#4 and transport the human species into a world based on Super Mario Bros - a well-specified task for an SI by comparison to most of the philosophical gibberish I've seen - in which case we would not be defaulting to self-organization of the free market economy.

[agi] Integrating uncertainty about computation into Bayesian causal networks?

2005-04-28 Thread Eliezer S. Yudkowsky
into causal networks, and I noticed that this subtopic was interesting enough to perhaps deserve a paper in its own right. I'm wondering whether anyone on the list has seen such integration attempted yet, by way of avoiding duplication of effort.

Re: [agi] estimated cost of Seed AI

2005-06-12 Thread Eliezer S. Yudkowsky
5000 $/year it will take 5-10 years (starting now) it will take 1-7 years (someone working on it already) Imho, its more like the development of H1-H4 sea clocks (John Harrison) cu Alex More or less me too.

Re: [agi] intuition [was: ... [was: ...]]

2006-05-09 Thread Eliezer S. Yudkowsky
are obvious; and even then, could only offer a high-level explanation, in terms of work performed by cognition and evolutionary selection pressures, rather than a neurological stack trace.

[agi] Re: Superrationality

2006-05-24 Thread Eliezer S. Yudkowsky
by considerations that these margins are too small to include. I haven't published this, but I believe I mentioned it on AGI during a discussion of AIXI.

[agi] Re: Superrationality

2006-05-25 Thread Eliezer S. Yudkowsky
://www.geocities.com/eganamit/NoCDT.pdf Here Solomon's Problem is referred to as The Smoking Lesion, but the formulation is equivalent.

Re: [agi] Re: Superrationality

2006-05-25 Thread Eliezer S. Yudkowsky
Problem.

Re: [agi] Re: Superrationality

2006-05-26 Thread Eliezer S. Yudkowsky
random sampling of computing elements, historical modeling, or even a sufficiently strong prior probability.

[agi] Re: Superrationality

2006-05-26 Thread Eliezer S. Yudkowsky
it is not.

Re: [agi] AGIRI Summit

2006-05-31 Thread Eliezer S. Yudkowsky
, you forget how to move, how to talk, and how to operate your brain.

Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-06 Thread Eliezer S. Yudkowsky
the AI will be Friendly. You should be able to win in that way if you can win at all, which is the point of the requirement.

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-16 Thread Eliezer S. Yudkowsky
has two components, probability theory and decision theory. If you leave out the decision theory, you can't even decide which information to gather.

Re: [agi] singularity humor

2006-07-13 Thread Eliezer S. Yudkowsky
I think this one was the granddaddy: http://yudkowsky.net/humor/signs-singularity.txt

Re: [agi] Processing speed for core intelligence in human brain

2006-07-14 Thread Eliezer S. Yudkowsky
speeds and neural speeds, evolutionary search and intelligent search, are not convertible quantities; it is like trying to convert temperature to mass, or writing an equation that says E = MC^3. See e.g. http://dspace.dial.pipex.com/jcollie/sle/index.htm

[agi] Re: strong and weakly self improving processes

2006-07-14 Thread Eliezer S. Yudkowsky
://whatisthought.com

[agi] Re: strong and weakly self improving processes

2006-07-15 Thread Eliezer S. Yudkowsky
, the cheap^3 reply seems to me valid because it asks what difference of experience we anticipate.

[agi] Re: strong and weakly self improving processes

2006-07-15 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: Eric Baum wrote: Eliezer Considering the infinitesimal amount of information that Eliezer evolution can store in the genome per generation, on the Eliezer order of one bit, Actually, with sex its theoretically possible to gain something like sqrt(P) bits per

Re: [agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-26 Thread Eliezer S. Yudkowsky
, and list moderators who can't bring themselves to say anything so impolite as Goodbye.

Re: [agi] fuzzy logic necessary?

2006-08-03 Thread Eliezer S. Yudkowsky
into acoustic vibrations so that you can transmit them to another human who translates them back into internal quantities.

[agi] Re: On proofs of correctness

2006-08-05 Thread Eliezer S. Yudkowsky
don't think this is mere argument-in-hindsight; it occurred to me long ago not to trust integer addition, just transistors. And even then, shielded hardware and reproducible software would not be out of order.

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Eliezer S. Yudkowsky
As long as we're talking about fantasy applications that require superhuman AGI, I'd be impressed by a lossy compression of Wikipedia that decompressed to a non-identical version carrying the same semantic information.

Re: [agi] Lossy ** lossless compression

2006-08-25 Thread Eliezer S. Yudkowsky
, and a Python interpreter that can process it at any finite speed you care to specify. Now write a program that looks at those endless fields of numbers, and says how many fingers I'm holding up behind my back. Looks like you'll have to compress that data first.

Re: [agi] Why so few AGI projects?

2006-09-14 Thread Eliezer S. Yudkowsky
first five years simply to figure out which way is up. But Shane, if you restrict yourself to results you can regularly publish, you couldn't work on what you really wanted to do, even if you had a million dollars.

Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Eliezer S. Yudkowsky
you mean by true AGI above.

Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky
Eric Baum wrote: (Why should producing a human-level AI be cheaper than decoding the genome?) Because the genome is encrypted even worse than natural language.

Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky
for us to understand than the human proteome!

Re: [agi] SOTA

2007-01-12 Thread Eliezer S. Yudkowsky
bought one for my last apartment. I see them all over the place. They're really not rare. Moral: in AI, the state of the art is often advanced far beyond what people think it is.

[agi] Optimality of using probability

2007-02-02 Thread Eliezer S. Yudkowsky
with the expected utility of simple uninformative priors, and working up to more structural forms of uncertainty. Thus, strictly justifying more and more abstract uses of probabilistic reasoning, as your knowledge about the environment becomes ever more vague.

Re: [agi] Betting and multiple-component truth values

2007-02-05 Thread Eliezer S. Yudkowsky

Re: [agi] Betting and multiple-component truth values

2007-02-05 Thread Eliezer S. Yudkowsky
consistent probabilities. This says nothing about what kind of mind we would *want* to build, though.

[agi] Re: Optimality of using probability

2007-02-05 Thread Eliezer S. Yudkowsky
the exact sum? How would you make the demonstration precise enough for an AI to walk through it, let alone independently discover it? *Intuitively* the argument is clear enough, I agree.

Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread Eliezer S. Yudkowsky
the hypothesis B raises the subjective probability of P(AB) over that you previously gave to P(A) - is probably with us to stay, even unto the furthest stars. It may greatly diminish but not be utterly defeated.
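
The probability-theory fact behind this snippet is the conjunction rule (standard textbook material, not from the original post): for any statements A and B,

\[
P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
\]

so assigning the conjunction a higher probability than either conjunct alone is always an error, however plausible the added detail B makes the overall story sound.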

Re: [agi] Betting and multiple-component truth values

2007-02-09 Thread Eliezer S. Yudkowsky

Re: [agi] Priors and indefinite probabilities

2007-02-11 Thread Eliezer S. Yudkowsky
one's attachment of probability b to the statement that: after k more observations have been made, one's best guess regarding the probability of S will lie in [L,U]. Ben, is the indefinite probability approach compatible with local propagation in graphical models?
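
A minimal data-structure sketch of the indefinite-probability quadruple described in this snippet (class and field names are my assumptions, not Novamente's actual API): credence b that, after k more observations, the estimated probability of S will lie in the interval [L, U].

from dataclasses import dataclass

@dataclass
class IndefiniteProbability:
    lower: float       # L: lower bound of the credible interval
    upper: float       # U: upper bound of the credible interval
    confidence: float  # b: credence that the future estimate falls in [L, U]
    lookahead: int     # k: number of additional observations assumed

# Example: 90% credence that after 100 more observations the estimated
# probability of S will lie in [0.6, 0.8].
p_S = IndefiniteProbability(lower=0.6, upper=0.8, confidence=0.9, lookahead=100)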

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eliezer S. Yudkowsky
Chuck Esterbrook wrote: On 2/18/07, Eliezer S. Yudkowsky wrote: Heh. Why not work in C++, then, and write your own machine language? No need to write files to disk, just coerce a pointer to a function pointer. I'm no Lisp fanatic, but this sounds more like a case

[agi] Gödel's theorem for intelligence

2007-05-15 Thread Eliezer S. Yudkowsky
this is obvious. Take a computation that halts if it finds an even number that is not the sum of two primes. Append AIXItl. QED.
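
The computation described here is a search for a counterexample to Goldbach's conjecture; a minimal sketch in Python (my reconstruction for illustration, not code from the original post) of the program whose halting behavior is at issue:

def is_prime(n):
    """Trial-division primality test; adequate for an illustration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search():
    """Halts only if some even number greater than 2 is NOT a sum of two primes.
    Whether this program ever halts is precisely the open question."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # found an even number that is not the sum of two primes
        n += 2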

Re: [agi] definitions of intelligence, again?!

2007-05-16 Thread Eliezer S. Yudkowsky
actually become less intelligent? It has become more powerful and less intelligent, in the same way that natural selection is very powerful and extremely stupid.

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Eliezer S. Yudkowsky
Why is Murray allowed to remain on this mailing list, anyway? As a warning to others? The others don't appear to be taking the hint.

Re: [agi] Opensource Business Model

2007-05-31 Thread Eliezer S. Yudkowsky
you require, to put a sentient being together? That the rest is just an implementation detail? That, moreover, *any* modern computer scientist knows it? What can I say, but: ...

Re: [agi] Opensource Business Model

2007-06-01 Thread Eliezer S. Yudkowsky
Russell Wallace wrote: On 6/1/07, *Eliezer S. Yudkowsky* wrote: Belldandy preserve us. You think you know everything you need to know, have every insight you require, to put a sentient being together? That the rest is just

Re: [agi] Beyond AI chapters up on Kurzweil

2007-06-01 Thread Eliezer S. Yudkowsky
J Storrs Hall, PhD wrote: The Age of Virtuous Machines http://www.kurzweilai.net/meme/frame.html?main=/articles/art0708.html I am referred to therein as Eliezer Yudkowsk. Hope this doesn't appear in the book too.

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread Eliezer S. Yudkowsky
Clues. Plural.

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Eliezer S. Yudkowsky
. But I never, ever said that, even as a joke, and was saddened but not surprised to hear it.

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Eliezer S. Yudkowsky
won't be the first to talk about it, either. So I guess the moral is that I shouldn't toss around the word absolutely - even when the point needs some heavy moral emphasis - about events so far in the past.

Re: [agi] AGI introduction

2007-06-24 Thread Eliezer S. Yudkowsky
in the sense that the designers had a particular hard AI subproblem in mind, like natural language.

Re: [agi] What's wrong with being biased?

2007-06-29 Thread Eliezer S. Yudkowsky
/statistical_bia.html http://www.overcomingbias.com/2007/04/useful_statisti.html Inductive bias: http://www.overcomingbias.com/2007/04/inductive_bias.html http://www.overcomingbias.com/2007/04/priors_as_mathe.html Cognitive bias: http://www.overcomingbias.com/2006/11/whats_a_bias_ag.html

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Eliezer S. Yudkowsky
convoluted and difficult to change. And because we lack the cultural knowledge of a theory of intelligence. But are probably quite capable of comprehending one.

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Eliezer S. Yudkowsky
