Let X_i, i=1,...,n, denote a set of discrete
random variables.
Is X_i the set of all integers between i and n, with the initial
value of i being 1? Or is i any member of the set X? Or does i
function only as a lower bound on the set X?
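For what it's worth, the standard reading of that notation (the usual convention, not anything specific from the original post) treats i purely as an index naming which variable is meant:

```latex
% n distinct random variables, one per value of the index i
X_1, X_2, \ldots, X_n, \qquad i \in \{1, \ldots, n\}
% i is a subscript selecting a variable; it is not an element of,
% or a bound on, any of the X_i themselves.
```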
Hi, me again. I forgot to ask: is
Philip,
The discussion at times seems to have progressed on the basis that
AIXI / AIXItl could choose to do all sorts of amazing, powerful things. But
what I'm unclear on is: what generates the infinite space of computer
programs?
Does AIXI / AIXItl itself generate these programs? Or does it
Ben Goertzel wrote:
Agreed, except for the very modest resources part. AIXI could
potentially accumulate pretty significant resources pretty quickly.
Agreed. But if the AIXI needs to disassemble the planet to build its
defense mechanism, the fact that it is harmless afterwards isn't going to
To avoid the problem entirely, you have to figure out how to make
an AI that
doesn't want to tinker with its reward system in the first place. This, in
turn, requires some tricky design work that would not necessarily seem
important unless one were aware of this problem. Which, of course,
http://www.optimal.org/peter/siai_guidelines.htm
Peter
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Ben Goertzel
I would recommend Eliezer's excellent writings on this topic if you don't
know them, chiefly www.singinst.org/CFAI.html . Also, I
In fact, physics is not random. But let's go a little further,
and here's what I want to say.
Physics is deterministic. Deterministic means that given a
system in one state, the following state can be inferred by
applying physical rules. It also works backwards: a given state
has only one
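A minimal sketch of that determinism claim: a toy "physics" where each state has exactly one successor and, because the update rule is invertible, exactly one predecessor. The rule itself (constant-velocity motion) is made up purely for illustration.

```python
# Toy deterministic dynamics: same input state -> same next state.

def step(state):
    """Deterministic forward update rule, unit time step."""
    x, v = state
    return (x + v, v)          # position advances by velocity

def step_back(state):
    """Inverse rule: recovers the unique predecessor of a state."""
    x, v = state
    return (x - v, v)

s0 = (0, 3)
s1 = step(s0)                  # forward: exactly one successor
assert step_back(s1) == s0     # backward: exactly one prior state
```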
Thursday, February 20, 2003, 10:58:57 AM, Ben Goertzel wrote:
BG OK... I can see that I formulated the problem too formally for a lot of
BG people
BG I will now rephrase it in the context of a specific test problem.
snip
BG I don't know if this test problem will clarify things or confuse them
Ben Goertzel wrote:
I don't think that preventing an AI from tinkering with its
reward system is the only solution, or even the best one...
It will in many cases be appropriate for an AI to tinker with its goal
system...
I don't think I was being clear there. I don't mean the AI should be
BG I don't know if this test problem will clarify things or
confuse them ;-)
For me, it's confused them. I thought I was following it before,
sorta...
OK, well I'm pressed for time today, so I'll write a nonmathematical version
of the problem late tonight or tomorrow or over the weekend.
Ben Goertzel wrote:
I don't think that preventing an AI from tinkering with its
reward system is the only solution, or even the best one...
It will in many cases be appropriate for an AI to tinker with its goal
system...
I don't think I was being clear there. I don't mean the AI
Thanks to Peter for starting this discussion and to Ben for following up. This seems
to me a more constructive way to talk about Friendly AI.
Now it's my turn to comment on the 8 Guidelines, according to my NARS design (for
people who have no idea what I'm talking about, see
Yes,
of course, the overlaps are the whole subtlety to the problem! This is
what's known as "probabilistic dependency" ;-)
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Kevin
Sent: Thursday, February 20, 2003 2:43 PM
To: [EMAIL
If P1 and P2 are contradictory, compare the truth values of the
assertions. If they are very similar, do nothing, because it's
impossible to know which is correct. If they vary
significantly(and at least one of them is above a certain
threshold), alter the probabilities towards one
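Read literally, the rule above might be sketched like this. The similarity margin, the threshold, and the adjustment step are all placeholder values for illustration, not values from NARS or any actual system:

```python
# Sketch of the contradiction-resolution rule described above.

SIMILAR = 0.05     # truth values closer than this: "impossible to know"
THRESHOLD = 0.7    # at least one value must exceed this before acting
STEP = 0.1         # fraction of the gap to move the weaker value

def resolve(p1, p2):
    """Return possibly-adjusted (p1, p2) for two contradictory assertions."""
    if abs(p1 - p2) < SIMILAR:
        return p1, p2                      # too close to call: do nothing
    if max(p1, p2) < THRESHOLD:
        return p1, p2                      # neither is confident enough
    # Otherwise, move the weaker assertion toward the stronger one.
    if p1 > p2:
        return p1, p2 + STEP * (p1 - p2)
    return p1 + STEP * (p2 - p1), p2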
But anyway, using the weighted-averaging rule dynamically and iteratively
can lead to problems in some cases. Maybe the mechanism you suggest -- a
nonlinear average of some sort -- would have better behavior; I'll think
about it.
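As a toy illustration of iterated revision reaching an equilibrium: repeatedly replace each estimate with a weighted average of itself and the group mean. Linear averaging of this sort converges to a fixed point (here, consensus at the mean); the weight and iteration count are arbitrary choices for the sketch, not parameters of the actual system.

```python
# Iterated weighted-average revision: each estimate is pulled toward
# the current mean of all estimates, and the process is repeated.

def revise(estimates, w=0.5, iterations=50):
    """Repeatedly average each estimate with the group mean."""
    for _ in range(iterations):
        mean = sum(estimates) / len(estimates)
        estimates = [w * e + (1 - w) * mean for e in estimates]
    return estimates

final = revise([0.2, 0.8, 0.5])
# every estimate converges to the (preserved) mean, 0.5
```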
The part of the idea that guaranteed an eventual equilibrium
Thursday, February 20, 2003, 2:25:54 PM, Ben Goertzel wrote:
BG The basic situation can be thought of as follows.
snip
Thanks, this does clarify things a lot. Your first statement of the
problem did leave some things out though...but, perhaps
unsurprisingly, I'm still a bit puzzled.
I don't
I was thinking about the so-called parallelism of the brain, which is a
poorly fitting metaphor at best... To explain the high resiliency of
neural circuits to minor variations in structure, the term "redundant"
seems more appropriate...
The computers we build can be viewed, in this context, as Many
Interestingly, in our system, we nearly always get an equilibrium even
without any kind of rate-of-change decay factor. It's just that if too
much conclusion-based premise revision goes on, then the equilibrium may
reflect a too-heavily-revised, illusory world. Basically, the process of
revising
Hi Cliff,
BG One thing that complicates the problem is that, in some
cases, as well as
BG inferring probabilities one hasn't been given, one may want to make
BG corrections to probabilities one HAS been given. For
instance, sometimes
BG one may be given inconsistent information, and one
Thursday, February 20, 2003, 8:11:48 PM, Ben Goertzel wrote:
CS Somehow I see this ending up as finding a set of bell curves (i.e.
CS their height, spread and optimum) for each estimate. That is to say I
CS don't see *just* the probability as relevant but the probability
CS distribution...if I
Isn't there some way, if a full curve is too computationally
expensive, of expressing, say, 2 sigmas (standard deviations)
or whatever? E.g. ~68% will fall within 1 standard dev. of optimum X?
We tried that, but generally, after a few inference iterations, the
confidence intervals
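The blow-up being alluded to can be illustrated with naive interval arithmetic: chain inferences whose strengths are known only to within an interval, and track the conclusion's interval. Even with fairly tight inputs, the bounds widen quickly over a few steps. The combination rule here (plain multiplication of strengths) is a stand-in for illustration, not the actual Webmind inference rule.

```python
# Propagating probability intervals through a chain of inferences.

def mul_interval(a, b):
    """Interval product for probabilities known only as (lo, hi)."""
    return (a[0] * b[0], a[1] * b[1])

link = (0.7, 0.9)          # each step's strength, known only to an interval
conclusion = link
for step in range(1, 5):
    conclusion = mul_interval(conclusion, link)
    lo, hi = conclusion
    print(f"after {step + 1} steps: [{lo:.3f}, {hi:.3f}]  width {hi - lo:.3f}")
# the conclusion interval keeps widening even though each input was tight
```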
On Wed, Feb 19, 2003 at 06:37:21PM -0500, Eliezer S. Yudkowsky wrote:
Similarity in this case may be (formally) emergent, in the sense that
most or all plausible initial conditions for a bootstrapping
superintelligence - even extremely exotic conditions like the birth of a
Friendly AI -
Hi Cliff and others,
As I came up with this kind of a test, perhaps I should
say a few things about its motivation...
The problem was that the Webmind system had a number of
proposed reasoning systems and it wasn't clear which was
the best. Essentially the reasoning systems took as input
a