http://www.google.com/search?q=+site:sl4.org+crossover+bandwidth
--
Eliezer S. Yudkowsky http://singinst.org/
that anyone who hasn't gotten far
enough theoretically to realize this also won't get very far on AGI
implementation.
--
Eliezer S. Yudkowsky http://singinst.org/
Eliezer S. Yudkowsky wrote:
There may be additional rationalization mechanisms I haven't identified
yet which are needed to explain anosognosia and similar disorders.
Mechanism (4) is the only one deep enough to explain why, for example,
the left hemisphere automatically and unconsciously
position?
--
Eliezer S. Yudkowsky http://singinst.org/
This is not a knockdown argument but it is a strong one; only Penrose and
Hameroff have had the courage to face it down openly - postulate, and
search for, both the new physics and the new neurology required.
--
Eliezer S. Yudkowsky http://singinst.org/
Spirit isn't emergent, and isn't everywhere, and isn't a figment of the
imagination, and isn't supernatural. Spirit refers to a real thing,
with a real explanation; it's just that the explanation is very, very
difficult.
--
Eliezer S. Yudkowsky http://singinst.org
Ben Goertzel wrote:
However, it's to be expected that an AGI's ethics will be different from any
human's ethics, even if closely related.
What do a Goertzelian AGI's ethics and a human's ethics have in common
that makes it a humanly ethical act to construct a Goertzelian AGI?
--
Eliezer S. Yudkowsky
substantially more thorough definitions in Creating Friendly AI.)
--
Eliezer S. Yudkowsky http://singinst.org/
Eliezer S. Yudkowsky wrote:
I recently read through Marcus
Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a
formal definition of intelligence, it is not a solution to Friendliness
(nor do I have any reason to believe Marcus Hutter intended it as one).
In fact, as one
Actually, Ben, AIXI and AIXI-tl are both formal systems; there is no
internal component in that formal system corresponding to a goal
definition, only an algorithm that humans use to determine when and how
hard they will press the reward button.
--
Eliezer S. Yudkowsky
the behaviors you
want... do you think it does?
--
Eliezer S. Yudkowsky http://singinst.org/
behaviors, but I think Hutter's already aware that this is
probably AIXI's weakest link.
--
Eliezer S. Yudkowsky http://singinst.org/
, is that Hutter's systems are
purely concerned with goal-satisfaction, whereas Novamente is not entirely
driven by goal-satisfaction.
Is this reflected in a useful or important behavior of Novamente, in its
intelligence or the way it interacts with humans, that is not possible for
AIXI?
--
Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote:
Not really. There is certainly a significant similarity between Hutter's
stuff and the foundations of Novamente, but there are significant
differences too. To sort out the exact relationship would take me
more than a few minutes' thought.
There are indeed major
-bounded uploaded human or an AIXI-tl, supplies
the uploaded human with a greater reward as the result of strategically
superior actions taken by the uploaded human.
:)
--
Eliezer S. Yudkowsky http://singinst.org/
process P which, given either a tl-bounded uploaded
human or an AIXI-tl, supplies the uploaded human with a greater
reward as the result of strategically superior actions taken by the
uploaded human.
:)
-- Eliezer S. Yudkowsky
Hmmm.
Are you saying that given a specific reward function
) AIXI has nothing
to say to you about the pragmatic problem of designing Novamente, nor are
its theorems relevant in building Novamente, etc. But that's exactly the
question I'm asking you. *Do* you believe that Novamente and AIXI rest on
the same foundations?
--
Eliezer S. Yudkowsky
--
Eliezer S. Yudkowsky http://singinst.org/
difference
between AIXI and a Friendly AI.
--
Eliezer S. Yudkowsky http://singinst.org/
posthuman rights that
we understand as little as a dog understands the right to vote.
-- James Hughes
--
Eliezer S. Yudkowsky http://singinst.org/
you're too tired to think about can't
hurt you.
--
Eliezer S. Yudkowsky http://singinst.org/
challenge C on which a tl-bounded human upload outperforms AIXI-tl?
--
Eliezer S. Yudkowsky http://singinst.org/
take a
proposal whose rational extrapolation is to Friendliness and which seems
to lie at a local optimum relative to the improvements I can imagine;
proof is impossible.
--
Eliezer S. Yudkowsky http://singinst.org/
or my own requires mentally
reproducing more of the abstract properties of AIXI-tl, given its abstract
specification, than your intuitions currently seem to be providing. Do
you have a non-intuitive mental simulation mode?
--
Eliezer S. Yudkowsky http://singinst.org
Bill Hibbard wrote:
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
It *could* do this but it *doesn't* do this. Its control process is such
that it follows an iterative trajectory through chaos which is forbidden
to arrive at a truthful solution, though it may converge to a stable
attractor
Bill Hibbard wrote:
Hey Eliezer, my name is Hibbard, not Hubbard.
*Argh* [sound of hand whapping forehead] Sorry.
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
*takes deep breath*
This is probably the third time you've sent a message
to me over the past few months where you make some
Bill Hibbard wrote:
Strange that there would be someone on this list with a
name so similar to mine.
I apologize, dammit! I whack myself over the head with a ballpeen hammer!
Now let me ask you this: Do you want to trade names?
--
Eliezer S. Yudkowsky http://singinst.org/
a top-level
reflective choice that wasn't there before, that (c) was abstracted over
an infinite recursion in your top-level predictive process.
But if this isn't immediately obvious to you, it doesn't seem like a top
priority to try and discuss it...
--
Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote:
But if this isn't immediately obvious to you, it doesn't seem like a top
priority to try and discuss it...
Argh. That came out really, really wrong and I apologize for how it
sounded. I'm not very good at agreeing to disagree.
Must... sleep...
--
Eliezer S. Yudkowsky
it, but a straight line like that only comes
along once.)
--
Eliezer S. Yudkowsky http://singinst.org/
complex PD.
--
Eliezer S. Yudkowsky http://singinst.org/
to cooperate with it, on the *one* shot PD? AIXI can't take the
action it needs to learn the utility of...
--
Eliezer S. Yudkowsky http://singinst.org/
Faster computers make AI easier. They do not make Friendly AI easier in
the least. Once there's enough computing power around that someone could
create AI if they knew exactly what they were doing, Moore's Law is no
longer your friend.
--
Eliezer S. Yudkowsky http://singinst.org/
some AGI-recognizable, previously unrecognized, usefully
predictable and reliably exploitable empirical regularities?
Or does any attempt to generate money via AGI require launching at least a
small specialized company to do so?
--
Eliezer S. Yudkowsky http://singinst.org/
Ben Goertzel wrote:
But of course, none of us *really know*.
Technically, I believe you mean that you *think* none of us really know,
but you don't *know* that none of us really know. To *know* that none of
us really know, you would have to really know.
--
Eliezer S. Yudkowsky
know for certain, but at the moment, the possibility of guesstimating
within even an order of magnitude seems premature.
See also Human-level software crossover date from the human crossover
metathread on SL4:
http://sl4.org/archive/0104/1057.html
--
Eliezer S. Yudkowsky
Wei Dai wrote:
Eliezer S. Yudkowsky wrote:
Important, because I strongly suspect Hofstadterian superrationality
is a *lot* more ubiquitous among transhumans than among us...
It's my understanding that Hofstadterian superrationality is not generally
accepted within the game theory research
convergence to decision processes that are
correlated with each other with respect to the one-shot PD. If you have
sufficient evidence that the other entity is a superintelligence, that
alone may be sufficient correlation.
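As an illustration of that correlation point, here is a minimal sketch in Python; the payoff matrix and the mirror probability are illustrative assumptions, not figures from the thread. Against a causally independent opponent, defection dominates; against an opponent whose decision process is strongly correlated with yours, cooperating on the one-shot PD has the higher expected payoff.

```python
# One-shot Prisoner's Dilemma payoffs for the row player (T > R > P > S).
# These particular numbers are illustrative assumptions.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation
    ("D", "D"): 1,  # P: mutual defection
}

def independent_payoff(my_move, p_other_cooperates):
    """Expected payoff against an uncorrelated opponent who cooperates
    with probability p_other_cooperates; defection dominates for every p."""
    return (p_other_cooperates * PAYOFF[(my_move, "C")]
            + (1 - p_other_cooperates) * PAYOFF[(my_move, "D")])

def correlated_payoff(my_move, p_mirror):
    """Expected payoff when the opponent's decision process is correlated
    with mine: with probability p_mirror it outputs the same move I do."""
    other = "D" if my_move == "C" else "C"
    return (p_mirror * PAYOFF[(my_move, my_move)]
            + (1 - p_mirror) * PAYOFF[(my_move, other)])

print(independent_payoff("D", 0.5), independent_payoff("C", 0.5))  # 3.0 1.5 -> defect
print(correlated_payoff("C", 0.95), correlated_payoff("D", 0.95))  # 2.85 1.2 -> cooperate
```

The last comparison is the sense in which sufficient evidence that the other party is a superintelligence running a relevantly similar decision process could itself supply the needed correlation.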
--
Eliezer S. Yudkowsky http://singinst.org/
c1) these miracles are unstable when subjected to further examination
c2) the AI still provides no benefit to humanity even given the miracle
When a branch of an AI extrapolation ends in such a scenario it may
legitimately be labeled a complete failure.
--
Eliezer S. Yudkowsky http://singinst.org/
). As far as I'm concerned, physically
implemented morality is physically implemented morality whether it's a
human, an AI, an AI society, or a human society.
--
Eliezer S. Yudkowsky http://singinst.org/
to construct AI societies. What we regard as
beneficial social properties are very contingent on our evolved individual
designs.
--
Eliezer S. Yudkowsky http://singinst.org/
the
following downsides. At this point a balanced risk/benefit assessment
can be made (not definitive, of course, since we haven't seen
super-intelligent AGIs in operation yet). But at least we've got some
relevant issues on the table to think about.
--
Eliezer S. Yudkowsky http://singinst.org/
and into outside-context failures of imagination.
--
Eliezer S. Yudkowsky http://singinst.org/
that
you do not yet know how to describe in purely physical terms, will fail to
work. That's part of what makes AI hard.
--
Eliezer S. Yudkowsky http://singinst.org/
that the result violates the Susskind holographic bound for an object
that can be contained in a 1-meter sphere - no more than 10^70 bits of
information.)
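The 10^70 figure is easy to sanity-check. The sketch below assumes the bound is the bounding surface area in Planck units divided by four (in nats), and reads "a 1-meter sphere" as one meter in diameter; both readings are my assumptions rather than anything stated in the post.

```python
import math

PLANCK_LENGTH = 1.616e-35   # metres
RADIUS = 0.5                # metres; "1-meter sphere" read as 1 m diameter (assumption)

area = 4 * math.pi * RADIUS ** 2              # bounding surface area, m^2
max_nats = area / (4 * PLANCK_LENGTH ** 2)    # holographic bound: S <= A / (4 l_p^2)
max_bits = max_nats / math.log(2)

print(f"~10^{math.log10(max_bits):.1f} bits")  # on the order of 10^70, matching the quoted bound
```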
--
Eliezer S. Yudkowsky http://singinst.org/
The Tao is the set of truths that can be stored in zero bits.
--
Eliezer S. Yudkowsky http://singinst.org/
30: Roll twice again on this table, disregarding this result.
--
Eliezer S. Yudkowsky http://singinst.org/
that are not justified.
Yes, I agree with you there.
An example?
--
Eliezer S. Yudkowsky http://singinst.org/
what you're
thinking of. You could easily end up having to go down to the molecular
level.
--
Eliezer S. Yudkowsky http://singinst.org/
#4 and transport the human species into a world based on
Super Mario Bros - a well-specified task for an SI by comparison to most
of the philosophical gibberish I've seen - in which case we would not be
defaulting to self-organization of the free market economy.
--
Eliezer S. Yudkowsky
into causal networks, and I
noticed that this subtopic was interesting enough to perhaps deserve a paper
in its own right. I'm wondering whether anyone on the list has seen such
integration attempted yet, by way of avoiding duplication of effort.
--
Eliezer S. Yudkowsky http://singinst.org/
5000 $/year
it will take 5-10 years (starting now)
it will take 1-7 years (someone working on it already)
IMHO, it's more like the development of H1-H4 sea clocks (John Harrison)
cu Alex
More or less me too.
--
Eliezer S. Yudkowsky http://singinst.org/
are obvious; and even then, could only offer a high-level
explanation, in terms of work performed by cognition and evolutionary
selection pressures, rather than a neurological stack trace.
--
Eliezer S. Yudkowsky http://singinst.org/
by considerations that
these margins are too small to include.
I haven't published this, but I believe I mentioned it on AGI during a
discussion of AIXI.
--
Eliezer S. Yudkowsky http://singinst.org/
http://www.geocities.com/eganamit/NoCDT.pdf
Here Solomon's Problem is referred to as The Smoking Lesion, but the
formulation is equivalent.
--
Eliezer S. Yudkowsky http://singinst.org/
Problem.
--
Eliezer S. Yudkowsky http://singinst.org/
random sampling of computing elements, historical
modeling, or even a sufficiently strong prior probability.
--
Eliezer S. Yudkowsky http://singinst.org/
it is not.
--
Eliezer S. Yudkowsky http://singinst.org/
, you forget how to move, how to talk,
and how to operate your brain.
--
Eliezer S. Yudkowsky http://singinst.org/
the AI will be Friendly. You
should be able to win in that way if you can win at all, which is the
point of the requirement.
--
Eliezer S. Yudkowsky http://singinst.org/
has two components, probability theory and
decision theory. If you leave out the decision theory, you can't even
decide which information to gather.
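A minimal sketch of why the decision-theory half is needed even to choose observations; the scenario, prior, and utilities below are made-up for illustration. The value of gathering a piece of information is itself an expected-utility difference, which probability theory alone does not define.

```python
# Toy decision problem: act with or without an umbrella under uncertain rain.
# The prior and utilities are illustrative assumptions.
P_RAIN = 0.3
UTILITY = {("umbrella", "rain"): 0, ("umbrella", "dry"): -1,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 0}
ACTIONS = ("umbrella", "no_umbrella")

def best_expected_utility(p_rain):
    """Best achievable expected utility acting on current beliefs alone."""
    return max(p_rain * UTILITY[(a, "rain")] + (1 - p_rain) * UTILITY[(a, "dry")]
               for a in ACTIONS)

def value_of_perfect_information(p_rain):
    """Expected gain from observing the weather before choosing an action."""
    informed = (p_rain * max(UTILITY[(a, "rain")] for a in ACTIONS)
                + (1 - p_rain) * max(UTILITY[(a, "dry")] for a in ACTIONS))
    return informed - best_expected_utility(p_rain)

print(value_of_perfect_information(P_RAIN))  # 0.7 > 0: this observation is worth gathering
```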
--
Eliezer S. Yudkowsky http://singinst.org/
I think this one was the granddaddy:
http://yudkowsky.net/humor/signs-singularity.txt
--
Eliezer S. Yudkowsky http://singinst.org/
speeds and neural speeds,
evolutionary search and intelligent search, are not convertible
quantities; it is like trying to convert temperature to mass, or writing
an equation that says E = mc^3.
See e.g. http://dspace.dial.pipex.com/jcollie/sle/index.htm
--
Eliezer S. Yudkowsky
http://whatisthought.com
--
Eliezer S. Yudkowsky http://singinst.org/
, the cheap^3
reply seems to me valid because it asks what difference of experience we
anticipate.
--
Eliezer S. Yudkowsky http://singinst.org/
Eliezer S. Yudkowsky wrote:
Eric Baum wrote:
Eliezer: Considering the infinitesimal amount of information that
evolution can store in the genome per generation, on the order of one bit,
Actually, with sex it's theoretically possible to gain something like
sqrt(P) bits per
, and
list moderators who can't bring themselves to say anything so impolite
as Goodbye.
--
Eliezer S. Yudkowsky http://singinst.org/
into
acoustic vibrations so that you can transmit them to another human who
translates them back into internal quantities.
--
Eliezer S. Yudkowsky http://singinst.org/
don't think this is mere argument-in-hindsight; it occurred to me long
ago not to trust integer addition, just transistors. And even then,
shielded hardware and reproducible software would not be out of order.
--
Eliezer S. Yudkowsky http://singinst.org/
As long as we're talking about fantasy applications that require
superhuman AGI, I'd be impressed by a lossy compression of Wikipedia
that decompressed to a non-identical version carrying the same semantic
information.
--
Eliezer S. Yudkowsky http://singinst.org
, and a Python interpreter
that can process it at any finite speed you care to specify.
Now write a program that looks at those endless fields of numbers, and
says how many fingers I'm holding up behind my back.
Looks like you'll have to compress that data first.
--
Eliezer S. Yudkowsky
first five years simply to figure out which
way is up. But Shane, if you restrict yourself to results you can
regularly publish, you couldn't work on what you really wanted to do,
even if you had a million dollars.
--
Eliezer S. Yudkowsky http://singinst.org/
you mean by true
AGI above.
--
Eliezer S. Yudkowsky http://singinst.org/
Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Because the genome is encrypted even worse than natural language.
--
Eliezer S. Yudkowsky http://singinst.org/
for us to understand than the human
proteome!
--
Eliezer S. Yudkowsky http://singinst.org/
bought one for my
last apartment. I see them all over the place. They're really not rare.
Moral: in AI, the state of the art is often advanced far beyond what
people think it is.
--
Eliezer S. Yudkowsky http://singinst.org/
with the expected utility of simple
uninformative priors, and working up to more structural forms of
uncertainty. Thus, strictly justifying more and more abstract uses of
probabilistic reasoning, as your knowledge about the environment becomes
ever more vague.
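A small sketch of the first rung of that progression; the toy setup and all numbers are my own illustration, not anything from the post. Expected utility is computed under a maximally uninformative prior over an unknown frequency, with a more structured prior contrasted to show how added knowledge changes the verdict.

```python
import statistics

# Unknown chance of success theta; acting yields +1 on success, -1 on failure.
# All numbers are illustrative assumptions.
def expected_utility(theta_samples, u_success=1.0, u_failure=-1.0):
    """Expected utility of acting, averaged over a sampled prior on theta."""
    return statistics.fmean(t * u_success + (1 - t) * u_failure for t in theta_samples)

uniform_prior = [i / 1000 for i in range(1001)]   # uninformative over [0, 1]
structured_prior = [0.1] * 800 + [0.9] * 200      # lumpier prior encoding more knowledge

print(expected_utility(uniform_prior))     # ~0.0: the uninformative prior gives no push either way
print(expected_utility(structured_prior))  # -0.48: structural knowledge changes the answer
```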
--
Eliezer S. Yudkowsky
consistent probabilities. This says nothing about what
kind of mind we would *want* to build, though.
--
Eliezer S. Yudkowsky http://singinst.org/
the exact sum?
How would you make the demonstration precise enough for an AI to walk
through it, let alone independently discover it?
*Intuitively* the argument is clear enough, I agree.
--
Eliezer S. Yudkowsky http://singinst.org/
the hypothesis
B raises the subjective probability of P(AB) over that you previously
gave to P(A) - is probably with us to stay, even unto the furthest
stars. It may greatly diminish but not be utterly defeated.
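If the fragment is describing the conjunction effect, as the comparison of P(AB) against P(A) suggests, the coherence constraint being violated is easy to verify mechanically; the joint distribution below is a made-up illustration.

```python
# Any joint distribution over two propositions A and B satisfies P(A and B) <= P(A);
# the probabilities here are illustrative assumptions.
joint = {(True, True): 0.20, (True, False): 0.30,
         (False, True): 0.25, (False, False): 0.25}

p_a = sum(p for (a, _), p in joint.items() if a)
p_a_and_b = joint[(True, True)]

assert p_a_and_b <= p_a  # holds for every coherent assignment, whatever the numbers
print(f"P(A) = {p_a:.2f}, P(A and B) = {p_a_and_b:.2f}")
```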
--
Eliezer S. Yudkowsky http://singinst.org/
one's attachment of probability b to
the statement that: after k more observations have been made, one's
best guess regarding the probability of S will lie in [L,U].
Ben, is the indefinite probability approach compatible with local
propagation in graphical models?
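Reading the definition exactly as quoted, a Monte Carlo sketch of such an interval might look like the following; the Beta representation of current belief and all parameter values are my assumptions for illustration, not Novamente's actual machinery.

```python
import random

def indefinite_interval(alpha, beta, k, b, trials=20000):
    """Estimate an interval [L, U] such that, with probability b, one's best-guess
    estimate of P(S) after k further observations will lie inside it.
    Current belief about P(S) is modelled as Beta(alpha, beta) (an assumption)."""
    future_estimates = []
    for _ in range(trials):
        theta = random.betavariate(alpha, beta)                      # hypothetical true frequency
        successes = sum(random.random() < theta for _ in range(k))   # k simulated observations
        future_estimates.append((alpha + successes) / (alpha + beta + k))  # posterior-mean guess
    future_estimates.sort()
    lo_index = int((1 - b) / 2 * trials)
    hi_index = int((1 + b) / 2 * trials) - 1
    return future_estimates[lo_index], future_estimates[hi_index]

print(indefinite_interval(alpha=4, beta=6, k=20, b=0.9))  # e.g. roughly (0.2, 0.6)
```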
--
Eliezer S. Yudkowsky
Chuck Esterbrook wrote:
On 2/18/07, Eliezer S. Yudkowsky wrote:
Heh. Why not work in C++, then, and write your own machine language?
No need to write files to disk, just coerce a pointer to a function
pointer. I'm no Lisp fanatic, but this sounds more like a case
this is obvious. Take a computation
that halts if it finds an even number that is not the sum of two
primes. Append AIXI-tl. QED.
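For reference, the halting computation in question is trivial to write down; whether it ever halts is exactly the open Goldbach conjecture. A sketch with a naive primality test:

```python
def is_prime(n):
    """Naive trial-division primality test; adequate for illustration."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n):
    """True iff the even number n can be written as p + q with p, q prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search():
    """Halts only if it finds an even number > 2 that is not the sum of two
    primes, i.e. only if the Goldbach conjecture is false."""
    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            return n
        n += 2
```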
--
Eliezer S. Yudkowsky http://singinst.org/
actually become less intelligent?
It has become more powerful and less intelligent, in the same way that
natural selection is very powerful and extremely stupid.
--
Eliezer S. Yudkowsky http://singinst.org/
Why is Murray allowed to remain on this mailing list, anyway? As a
warning to others? The others don't appear to be taking the hint.
--
Eliezer S. Yudkowsky http://singinst.org/
you require, to put a sentient being
together? That the rest is just an implementation detail? That,
moreover, *any* modern computer scientist knows it?
What can I say, but:
...
--
Eliezer S. Yudkowsky http://singinst.org/
Russell Wallace wrote:
On 6/1/07, Eliezer S. Yudkowsky wrote:
Belldandy preserve us. You think you know everything you need to
know, have every insight you require, to put a sentient being
together? That the rest is just
J Storrs Hall, PhD wrote:
The Age of Virtuous Machines
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0708.html
I am referred to therein as Eliezer Yudkowsk. Hope this doesn't
appear in the book too.
--
Eliezer S. Yudkowsky http://singinst.org
Clues. Plural.
--
Eliezer S. Yudkowsky http://singinst.org/
. But I
never, ever said that, even as a joke, and was saddened but not
surprised to hear it.
--
Eliezer S. Yudkowsky http://singinst.org/
won't be the first to talk about it, either.
So I guess the moral is that I shouldn't toss around the word
absolutely - even when the point needs some heavy moral emphasis -
about events so far in the past.
--
Eliezer S. Yudkowsky http://singinst.org/
in the sense that the designers had a particular
hard AI subproblem in mind, like natural language.
--
Eliezer S. Yudkowsky http://singinst.org/
/statistical_bia.html
http://www.overcomingbias.com/2007/04/useful_statisti.html
Inductive bias:
http://www.overcomingbias.com/2007/04/inductive_bias.html
http://www.overcomingbias.com/2007/04/priors_as_mathe.html
Cognitive bias:
http://www.overcomingbias.com/2006/11/whats_a_bias_ag.html
--
Eliezer S. Yudkowsky
convoluted and difficult to change.
And because we lack the cultural knowledge of a theory of
intelligence. But we are probably quite capable of comprehending one.
--
Eliezer S. Yudkowsky http://singinst.org/