Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Bill Hibbard
Eliezer S. Yudkowsky wrote:
 Bill Hibbard wrote:
  On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
 
 It *could* do this but it *doesn't* do this.  Its control process is such
 that it follows an iterative trajectory through chaos which is forbidden
 to arrive at a truthful solution, though it may converge to a stable
 attractor.
 
  This is the heart of the fallacy. Neither a human nor an AIXI
  can know that his synchronized other self - whichever one
  he is - is doing the same. All a human or an AIXI can know is
  its observations. They can estimate but not know the intentions
  of other minds.

 The halting problem establishes that you can never perfectly understand
 your own decision process well enough to predict its decision in advance,
 because you'd have to take into account the decision process including the
 prediction, et cetera, establishing an infinite regress.

 However, Corbin doesn't need to know absolutely that his other self is
 synchronized, nor does he need to know his other self's decision in
 advance.  Corbin only needs to establish a probabilistic estimate, good
 enough to guide his actions, that his other self's decision is correlated
 with his *after* the fact.  (I.e., it's not a halting problem where you
 need to predict yourself in advance; you only need to know your own
 decision after the fact.)

 AIXI-tl is incapable of doing this for complex cooperative problems
 because its decision process only models tl-bounded things and AIXI-tl is
 not *remotely close* to being tl-bounded.

Now you are using a different argument. Your previous argument was:

 Lee Corbin can work out his entire policy in step (2), before step
 (3) occurs, knowing that his synchronized other self - whichever one
 he is - is doing the same.

Now you have Corbin merely estimating his clone's intentions.
While it is true that AIXI-tl cannot completely simulate itself,
it can still estimate another AIXI-tl's future behavior based on
observed behavior.

Your argument is now that Corbin can do it better. I don't
know if this is true or not.

 . . .
 Let's say that AIXI-tl takes action A in round 1, action B in round 2, and
 action C in round 3, and so on up to action Z in round 26.  There's no
 obvious reason for the sequence {A...Z} to be predictable *even
 approximately* by any of the tl-bounded processes AIXI-tl uses for
 prediction.  Any given action is the result of a tl-bounded policy but the
 *sequence* of *different* tl-bounded policies was chosen by a t2^l process.

Your example sequence is pretty simple and should match a
nice simple universal Turing machine program in an AIXI-tl,
well within its bounds. Furthermore, two AIXI-tl's will
probably converge on a simple sequence in the prisoner's
dilemma. But I have no idea if they can do it better than
Corbin and his clone.

Bill
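
For concreteness, a toy sketch of the structure in dispute (Python;
not Hutter's actual construction, and the value estimate is a made-up
stand-in): each individual action comes out of a single tl-bounded
policy, while the choice among all 2^l candidate policies is the
t2^l-scale step.

    from itertools import product

    def run_policy(program_bits, observation, t):
        # Stand-in for running a length-<=l program for at most t steps;
        # here just a toy deterministic function of program and input.
        return (sum(program_bits) + observation) % 26  # actions A..Z as 0..25

    def aixi_tl_step(history, l=8, t=100):
        # Outer loop: sweep all 2^l candidate policies (the t2^l-scale part).
        # Each candidate evaluation is itself tl-bounded.
        best, best_score = None, float("-inf")
        for bits in product((0, 1), repeat=l):
            score = sum(run_policy(bits, obs, t) for obs in history)
            if score > best_score:
                best, best_score = bits, score
        return run_policy(best, history[-1], t)

    print(aixi_tl_step([3, 1, 4]))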




Re: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
Eliezer S. Yudkowsky wrote:
  . . .
  Yes. Laws (logical constraints) are inevitably ambiguous.

 Does that include the logical constraints governing the reinforcement
 process itself?

There is a logic of the reinforcement process, but it is a
behavior rather than a constraint on a behavior.

By ambiguity, I mean for example the ambiguity of Asimov's
first law, which says "A robot may not injure a human being,
or, through inaction, allow a human being to come to harm."
This is ambiguous in the situation where one human is harming
another, and the law requires the robot to intervene for the
victim but prohibits it from intervening against the attacker.

 Like, the complexity of everything the SI needs to do is some very high
 quantity, while the complexity of the principles that are supposed to
 entail it is small, right?
 
  As wonderfully demonstrated by Eric Baum's papers, complex
  behaviors are learned via simple values.

 *Some* complex behaviors can be learned via *some* simple values.  The
 question is understanding *which* simple values result in the learning of
 which complex behaviors; for example, Eric Baum's system had to be created
 with simple values that behave in a very precise way in order to achieve
 its current level of learning ability.  That's why Eric Baum had to write
 the paper, instead of just saying "Aha, I can produce complex behaviors
 via simple values."

Baum's algorithm is very carefully worked out, but the
reinforcement values it learns from are simple. And a
successful reinforcement learning algorithm is one that
can work from any reinforcement values in any situation.

 . . .
 Yes, reinforcement learning generates very complex behaviors from there.
 The question is *which* complex behaviors - whether you see all the
 complex behaviors you want to see, and none of the complex behaviors you
 don't want to see.

 Can I take it from the above that you believe that AI morality can be
 created by reinforcing behaviors using a predicate P that acts on incoming
 video sensory information to recognize smiles and laughter and generate a
 reward signal?  Is this adequate for a superintelligence too?

The key for intelligence is a good reinforcement learning
algorithm that can work from any reinforcement values and
efficiently learn behaviors that maximize those values.

So the values can be simple, like recognizing smiles and
laughter, and the learned behaviors can be complex, even
to the point of making billions of humans happy. That
qualifies as super-intelligent.
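
A minimal sketch of that division of labor (toy tabular Q-learning;
the predicate P and the environment here are made-up stand-ins, not
any proposed system): the values are just P applied to raw
observations, and all of the behavioral complexity lives in the
learned table.

    import random
    from collections import defaultdict

    def q_learn(env_step, P, episodes=1000, alpha=0.1, gamma=0.9,
                eps=0.1, actions=(0, 1)):
        Q = defaultdict(float)                  # (state, action) -> value
        for _ in range(episodes):
            state = 0
            for _ in range(50):                 # bounded episode length
                if random.random() < eps:       # occasional exploration
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda x: Q[(state, x)])
                state2, obs = env_step(state, a)
                r = 1.0 if P(obs) else 0.0      # simple value: P on raw input
                best_next = max(Q[(state2, x)] for x in actions)
                Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
                state = state2
        return Q

    # Toy environment in which some action sequences produce a "smile".
    Q = q_learn(lambda s, a: ((s + a) % 4, "smile" if (s + a) % 4 == 3 else ""),
                P=lambda obs: obs == "smile")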

Despite the wonderful work of Eric Baum and others,
developing really robust reinforcement learning is a
really hard challenge. Which is why my estimate for
the arrival of SI is 2100 rather than 2010 or 2020.
I hope I'm wrong, because I want to meet a SI.

Bill




Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Philip Sutton
Eliezer/Ben,

When you've had time to draw breath can you explain, in non-obscure, 
non-mathematical language, what the implications of the AIXI-tl 
discussion are?

Thanks.

Cheers, Philip




RE: [agi] Breaking AIXI-tl

2003-02-15 Thread Ben Goertzel

Hi,

 There's a physical challenge which operates on *one* AIXI-tl and breaks
 it, even though it involves diagonalizing the AIXI-tl as part of the
 challenge.

OK, I see what you mean by calling it a "physical challenge".  You mean
that, as part of the challenge, the external agent posing the challenge is
allowed to clone the AIXI-tl.

   An intuitively fair, physically realizable challenge, with important
   real-world analogues, formalizable as a computation which can be fed
   either a tl-bounded uploaded human or an AIXI-tl, for which the human
   enjoys greater success measured strictly by total reward over time, due
   to the superior strategy employed by that human as the result of
   rational reasoning of a type not accessible to AIXI-tl.

 It's really the formalizability of the challenge as a computation which
 can be fed either a *single* AIXI-tl or a *single* tl-bounded uploaded
 human that makes the whole thing interesting at all... I'm sorry I didn't
 succeed in making clear the general class of real-world analogues for
 which this is a special case.

OK... I don't see how the challenge you've described is
formalizable as a computation which can be fed either a tl-bounded uploaded
human or an AIXI-tl.

The challenge involves cloning the agent being challenged.  Thus it is not a
computation feedable to the agent, unless you assume the agent is supplied
with a cloning machine...

 If I were to take a very rough stab at it, it would be that the
 cooperation case with your own clone is an extreme case of many scenarios
 where superintelligences can cooperate with each other on the one-shot
  Prisoner's Dilemma provided they have *loosely similar* reflective goal
 systems and that they can probabilistically estimate that enough loose
 similarity exists.

Yah, but the definition of a superintelligence is relative to the agent
being challenged.

For any fixed superintelligent agent A, there are AIXItl's big enough to
succeed against it in any cooperative game.

To break AIXI-tl, the challenge needs to be posed in a way that refers to
AIXItl's own size, i.e. one has to say something like "Playing a cooperative
game with other intelligences of intelligence at least f(t,l)", where f is
some increasing function.

If the intelligence of the opponents is fixed, then one can always make an
AIXItl win by increasing t and l ...

So your challenges are all of the form:

* For any fixed AIXItl, here is a challenge that will defeat it

ForAll AIXItl's A(t,l), ThereExists a challenge C(t,l) so that fails_at(A,C)

or alternatively

ForAll AIXItl's A(t,l), ThereExists a challenge C(A(t,l)) so that
fails_at(A,C)

rather than of the form

* Here is a challenge that will defeat any AIXItl

ThereExists a challenge C so that ForAll AIXItl's A(t,l), fails_at(A,C)

The point is that the challenge C is a function C(t,l) rather than being
independent of t and l

This of course is why your challenge doesn't break Hutter's theorem.  But
it's a distinction that your initial verbal formulation didn't make very
clearly (and I understand, the distinction is not that easy to make in
words.)

Of course, it's also true that

ForAll uploaded humans H, ThereExists a challenge C(H) so that fails_at(H,C)

What you've shown that's interesting is that

ThereExists a challenge C, so that:
-- ForAll AIXItl's A(t,l), fails_at(A,C(A))
-- for many uploaded humans H, succeeds_at(H,C(H))

(Where, were one to try to actually prove this, one would substitute
uploaded humans with other AI programs or something).



  The interesting part is that these little
 natural breakages in the formalism create an inability to take part in
 what I think might be a fundamental SI social idiom, conducting binding
 negotiations by convergence to goal processes that are guaranteed to have
 a correlated output, which relies on (a) Bayesian-inferred initial
 similarity between goal systems, and (b) the ability to create a
 top-level
 reflective choice that wasn't there before, that (c) was abstracted over
 an infinite recursion in your top-level predictive process.

I think part of what you're saying here is that AIXItl's are not designed to
be able to participate in a community of equals... This is certainly true.

--- Ben G




RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Ben Goertzel

Hi,

 Baum's algorithm is very carefully worked out, but the
 reinforcement values it learns from are simple. And a
 successful reinforcement learning algorithm is one that
 can work from any reinforcement values in any situation.

Yeah, I don't think that Baum's algorithm needs special reinforcement values.

However, it is VERY slow and seems to require very careful parameter tuning.

As Moshe pointed out to me, Marcus Hutter and his students tried to
replicate Baum's work, with mixed results:

go to

http://www.idsia.ch/~marcus/

click on "Artificial Intelligence" and scroll down to

"Market-Based Reinforcement Learning in Partially Observable Worlds" (with I.
Kwee & J. Schmidhuber)
Proceedings of the 11th International Conference on Artificial Neural
Networks (ICANN-2001) 865-873


 Despite the wonderful work of Eric Baum and others,
 developing really robust reinforcement learning is a
 really hard challenge. Which is why my estimate for
 the arrival of SI is 2100 rather than 2010 or 2020.
 I hope I'm wrong, because I want to meet a SI.


Bill, I think that *thinking about* the AGI problem as a problem of
developing really robust reinforcement learning is CORRECT but
UNPRODUCTIVE.  I think that if you think about the problem as one of
creating an integrated mind-system, and build the integrated mind-system,
you will find that the robust reinforcement learning comes along due to
coordinated emergent behaviors of various components.

So ultimately, I don't think that ultra-clever pure-reinforcement-learning
schemes like Baum's are the road to AGI, although they may play a role.

It wouldn't be the first time in the history of science that a problem
looked close-to-impossible from one perspective, but became manageable via a
perspective-shift.

-- Ben G




RE: [agi] Breaking AIXI-tl

2003-02-15 Thread Ben Goertzel

hi,

 No, the challenge can be posed in a way that refers to an arbitrary agent
 A which a constant challenge C accepts as input.

But the problem with saying it this way, is that the constant challenge
has to have an infinite memory capacity.

So in a sense, it's an infinite constant ;)

 No, the charm of the physical challenge is exactly that there exists a
 physically constant cavern which defeats any AIXI-tl that walks into it,
 while being tractable for wandering tl-Corbins.

No, this isn't quite right.

If the cavern is physically constant, then there must be an upper limit to
the t and l for which it can clone AIXItl's.

If the cavern has N bits (assuming a bitistic reduction of physics, for
simplicity ;), then it can't clone an AIXItl where t > 2^N, can it?  Not
without grabbing bits (particles or whatever) from the outside universe to
carry out the cloning.  (and how could the AIXItl with t > 2^N even fit
inside it??)

You still need the quantifiers reversed: for any AIXI-tl, there is a cavern
posing a challenge that defeats it...

  I think part of what you're saying here is that AIXItl's are
 not designed to
  be able to participate in a community of equals... This is
 certainly true.

 Well, yes, as a special case of AIXI-tl's being unable to carry out
 reasoning where their internal processes are correlated with the
 environment.

Agreed...

(See, it IS actually possible to convince me of something, when it's
correct; I'm actually not *hopelessly* stubborn ;)

ben




Re: [agi] who is this Bill Hubbard I keep reading about?

2003-02-15 Thread Bill Hibbard
Ed,

I agree that it was very decent of Philip to admit
to starting the mis-spelling of my name. My general
complaint about the mis-spelling was sent hours before
I even read Eliezer's message, but due to the vagaries
of email was delivered hours after my reply to Eliezer,
giving the impression that I was fuming about it. Not
so, I just wanted to let folks know it was an error.

 Bill, I've looked at some of your work...it's both plentiful and powerful!
 Personally, I like your Götterdämmerung text and the VisAD applied stuff.
 It seems you're the lead architect on VisAD.  Is the system presently used
 for any gaming applications, i.e. game (3D) development platforms, e.g.
 OpenGL?

It's not used for gaming. Its strength is that it integrates
metadata such as units, coordinate systems, sampling topologies
and geometries, missing-data indicators, and error estimates into
a data model that can express pretty much any numerical data. Also,
it supports interfaces to a variety of scientific file and
server formats, and earth navigation for a variety of earth
satellites. This probably doesn't help games much, but is good
for science. It does use OpenGL via Java3D.

Cheers,
Bill




Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:

 hi,


  No, the challenge can be posed in a way that refers to an arbitrary agent
  A which a constant challenge C accepts as input.


 But the problem with saying it this way, is that the constant challenge
 has to have an infinite memory capacity.

 So in a sense, it's an infinite constant ;)


Infinite Turing tapes are a pretty routine assumption in operations like 
these.  I think Hutter's AIXI-tl is supposed to be able to handle constant 
environments (as opposed to constant challenges, a significant formal 
difference) that contain infinite Turing tapes.  Though maybe that'd 
violate separability?  Come to think of it, the Clone challenge might 
violate separability as well, since AIXI-tl (and hence its Clone) builds 
up state.

  No, the charm of the physical challenge is exactly that there exists a
  physically constant cavern which defeats any AIXI-tl that walks into it,
  while being tractable for wandering tl-Corbins.


 No, this isn't quite right.

 If the cavern is physically constant, then there must be an upper limit to
 the t and l for which it can clone AIXItl's.


Hm, this doesn't strike me as a fair qualifier.  One, if an AIXItl exists 
in the physical universe at all, there are probably infinitely powerful 
processors lying around like sunflower seeds.  And two, if you apply this 
same principle to any other physically realized challenge, it means that 
people could start saying "Oh, well, AIXItl can't handle *this* challenge 
because there's an upper bound on how much computing power you're allowed 
to use."  If Hutter's theorem is allowed to assume infinite computing 
power inside the Cartesian theatre, then the magician's castle should be 
allowed to assume infinite computing power outside the Cartesian theatre. 
 Anyway, a constant cave with an infinite tape seems like a constant 
challenge to me, and a finite cave that breaks any {AIXI-tl, tl-human} 
contest up to l=googlebyte also still seems interesting, especially as 
AIXI-tl is supposed to work for any tl, not just sufficiently high tl.

  Well, yes, as a special case of AIXI-tl's being unable to carry out
  reasoning where their internal processes are correlated with the
  environment.


 Agreed...

 (See, it IS actually possible to convince me of something, when it's
 correct; I'm actually not *hopelessly* stubborn ;)


Yes, but it takes t2^l operations.

(Sorry, you didn't deserve it, but a straight line like that only comes 
along once.)

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
Ben,

 As Moshe pointed out to me, Marcus Hutter and his students tried to
 replicate Baum's work, with mixed results:

 go to

 http://www.idsia.ch/~marcus/

 click on "Artificial Intelligence" and scroll down to

 "Market-Based Reinforcement Learning in Partially Observable Worlds" (with I.
 Kwee & J. Schmidhuber)
 Proceedings of the 11th International Conference on Artificial Neural
 Networks (ICANN-2001) 865-873

Thanks.

  Despite the wonderful work of Eric Baum and others,
  developing really robust reinforcement learning is a
  really hard challenge. Which is why my estimate for
  the arrival of SI is 2100 rather than 2010 or 2020.
  I hope I'm wrong, because I want to meet a SI.

 Bill, I think that *thinking about* the AGI problem as a problem of
 developing really robust reinforcement learning is CORRECT but
 UNPRODUCTIVE.  I think that if you think about the problem as one of
 creating an integrated mind-system, and build the integrated mind-system,
 you will find that the robust reinforcement learning comes along due to
 coordinated emergent behaviors of various components.

In my book I say that consciousness is part of the way
the brain implements reinforcement learning, and I think
something like that is necessary for a really robust
solution. That's why I think it will take 100 years.

I try to give short answers, which sometimes gives the
mistaken impression that I think things are simple.

 So ultimately, I don't think that ultra-clever pure-reinforcement-learning
 schemes like Baum's are the road to AGI, although they may play a role.

 It wouldn't be the first time in the history of science that a problem
 looked close-to-impossible from one perspective, but became manageable via a
 perspective-shift.

I hope that when I say something will take 100 years,
that indicates that I think it is not straightforward
and will require a number of major conceptual leaps.

Bill




RE: [agi] Breaking AIXI-tl

2003-02-15 Thread Ben Goertzel


   Anyway, a constant cave with an infinite tape seems like a constant
 challenge to me, and a finite cave that breaks any {AIXI-tl, tl-human}
 contest up to l=googlebyte also still seems interesting, especially as
 AIXI-tl is supposed to work for any tl, not just sufficiently high tl.

It's a fair mathematical challenge ... the reason I complained is that the
physical-world metaphor of a cave seems to me to imply a finite system.

A cave with an infinite tape in it is no longer a realizable physical
system!

  (See, it IS actually possible to convince me of something, when it's
  correct; I'm actually not *hopelessly* stubborn ;)

 Yes, but it takes t2^l operations.

 (Sorry, you didn't deserve it, but a straight line like that only comes
 along once.)

;-)


ben




Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Alan Grimes
Eliezer S. Yudkowsky wrote:
 Let's imagine I'm a superintelligent magician, sitting in my castle, 
 Dyson Sphere, what-have-you.  I want to allow sentient beings some way 
 to visit me, but I'm tired of all these wandering AIXI-tl spambots that 
 script kiddies code up to brute-force my entrance challenges.  I don't 
 want to tl-bound my visitors; what if an actual sentient 10^10^15 
 ops/sec being wants to visit me?  I don't want to try and examine the 
 internal state of the visiting agent, either; that just starts a war of 
 camouflage between myself and the spammers.  Luckily, there's a simple 
 challenge I can pose to any visitor, cooperation with your clone, that 
 filters out the AIXI-tls and leaves only beings who are capable of a 
 certain level of reflectivity, presumably genuine sentients.  I don't 
 need to know the tl-bound of my visitors, or the tl-bound of the 
 AIXI-tl, in order to construct this challenge.  I write the code once.

Oh, that's trivial to break. I just put my AIXI-tl (whatever that is) in
a human body and send it via rocket-ship... There would be no way to clone
this being, so you would have no way to carry out the test.

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Bill Hibbard
On Sat, 15 Feb 2003, Ben Goertzel wrote:

  In my book I say that consciousness is part of the way
  the brain implements reinforcement learning, and I think
  something like that is necessary for a really robust
  solution. That's why I think it will take 100 years.

 I would say, rather, that consciousness and reinforcement learning are BOTH
 consequences of having the right kind of integrative AGI system...

I look at it from the perspective of evolution of minds.
There is a simple reinforcement learning mechanism in
brains, as described in:

  HOW THE BASAL GANGLIA USE PARALLEL EXCITATORY AND INHIBITORY
  LEARNING PATHWAYS TO SELECTIVELY RESPOND TO UNEXPECTED REWARDING
  CUES. Brown, J., Bullock, D., and Grossberg, S. (1999). Journal
  of Neuroscience.
  on-line at http://cns-web.bu.edu/pub/diana/BroBulGro99.pdf

But this solves the credit assignment problem in only very
limited cases. I think that consciousness (and other things)
evolved in order to do reinforcement learning in more general
situations. That is, the need for robust reinforcement
learning provided the selection pressure for consciousness.

Bill
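
A toy illustration of credit assignment in its simplest solvable form
(TD(lambda) with eligibility traces; this is not meant to model the
basal ganglia): reward arrives only at the end of a chain of states,
yet value propagates back to the earlier cue states.

    def td_lambda(n_states=5, episodes=200, alpha=0.2, gamma=0.95, lam=0.8):
        V = [0.0] * n_states
        for _ in range(episodes):
            e = [0.0] * n_states                    # eligibility traces
            for s in range(n_states):               # visit the chain in order
                r = 1.0 if s == n_states - 1 else 0.0  # reward only at the end
                v_next = V[s + 1] if s + 1 < n_states else 0.0
                delta = r + gamma * v_next - V[s]
                e[s] += 1.0
                for i in range(n_states):           # credit flows back along traces
                    V[i] += alpha * delta * e[i]
                    e[i] *= gamma * lam
        return V

    print(td_lambda())  # early states end up valued despite the delayed reward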




RE: [agi] unFriendly Hibbard SIs

2003-02-15 Thread Ben Goertzel

 But this solves the credit assignment problem in only very
 limited cases. I think that consciousness (and other things)
 evolved in order to do reinforcement learning in more general
 situations. That is, the need for robust reinforcement
 learning provided the selection pressure for consciousness.

 Bill

Clearly, this is part of the story.

But I tend to be skeptical of neo-Darwinist "just so" stories when applied
to very complex systems.

I think that spontaneous self-organization (building on evolved and
self-organized components) is just as important.

Consciousness (i.e. intensive focused attention, as in human consciousness)
was presumably a result of self-organization, of spontaneous emergence
among subsystems that evolved for specific purposes.

Then, once it was there, it was reinforced by evolution because of its
usefulness in reinforcement learning and other things.

But consciousness also has many aspects that are not directly explicable by
evolutionary selection, in my view...

Ben G




RE: [agi] unbreakable data encryption

2003-02-15 Thread Daniel Colonnese

A few days ago this article came out:
http://www.israel21c.org/bin/en.jsp?enPage=BlankPage&enDisplay=view&enDispWhat=object&enDispWho=Articles%5El306&enZone=Technology&enVersion=0

A company called Meganet is claiming that their VME encryption is
unbreakable and even offering prizes such as a Ferrari or $1m. to anyone
who could break into a VME-protected file.

Cool huh?


*
 Daniel Colonnese
 Computer Science Dept. NCSU
 2718 Clark Street
 Raleigh NC 27670
 Voice: (919) 451-3141
 Fax: (775) 361-4495
 http://www4.ncsu.edu:8030/~dcolonn/
* 




Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:
 In a naturalistic universe, where there is no sharp boundary between
 the physics of you and the physics of the rest of the world, the
 capability to invent new top-level internal reflective choices can be
 very important, pragmatically, in terms of properties of distant
 reality that directly correlate with your choice to your benefit, if
 there's any breakage at all of the Cartesian boundary - any
 correlation between your mindstate and the rest of the environment.

 Unless, you are vastly smarter than the rest of the universe.  Then you
 can proceed like an AIXItl and there is no need for top-level internal
 reflective choices ;)

Actually, even if you are vastly smarter than the rest of the entire 
universe, you may still be stuck dealing with lesser entities (though not 
humans; superintelligences at least) who have any information at all about 
your initial conditions, unless you can make top-level internal reflective 
choices.

The chance that environmental superintelligences will cooperate with you 
in PD situations may depend on *their* estimate of *your* ability to 
generalize over the choice to defect and realize that a similar temptation 
exists on both sides.  In other words, it takes a top-level internal 
reflective choice to adopt a cooperative ethic on the one-shot complex PD 
rather than blindly trying to predict and outwit the environment for 
maximum gain, which is built into the definition of AIXI-tl's control 
process.  A superintelligence may cooperate with a comparatively small, 
tl-bounded AI, but be unable to cooperate with an AIXI-tl, provided there 
is any inferrable information about initial conditions.  In one sense 
AIXI-tl wins; it always defects, which formally is a better choice 
than cooperating on the oneshot PD, regardless of what the opponent does - 
assuming that the environment is not correlated with your decisionmaking 
process.  But anyone who knows that assumption is built into AIXI-tl's 
initial conditions will always defect against AIXI-tl.  A small, 
tl-bounded AI that can make reflective choices has the capability of 
adopting a cooperative ethic; provided that both entities know or infer 
something about the other's initial conditions, they can arrive at a 
knowably correlated reflective choice to adopt cooperative ethics.

AIXI-tl can learn the iterated PD, of course; just not the oneshot complex PD.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] unbreakable data encryption

2003-02-15 Thread Cliff Stabbert
Saturday, February 15, 2003, 1:25:19 PM, Daniel Colonnese wrote:


DC A few days ago this article came out:
DC http://www.israel21c.org/bin/en.jsp?enPage=BlankPage&enDisplay=view&enDispWhat=object&enDispWho=Articles%5El306&enZone=Technology&enVersion=0

DC A company called Meganet is claiming that their VME encryption is
DC unbreakable and even offering prizes such as a Ferrari or $1m. to anyone
DC who could break into a VME-protected file.

DC Cool huh?

On closer inspection, not really.  The company and its claims have
been around for a while.  (And the challenge has expired).

As to those claims, having recently been reading about Kolmogorov
complexity, Chaitin's omega and AIT I was immediately struck by this
quote from http://www.meganet.com/technology/intro.htm :
  The basis of VME is a Virtual Matrix, a matrix of binary values which
  is, in theory, infinite in size and therefore contains no redundant
  values. The data to be encrypted is compared to the data in the
  Virtual Matrix. Once a match is found, a set of pointers that indicate
  how to navigate inside the Virtual Matrix is created.

Infinite matrix, therefore no redundant values...uhuh...just figure
out the pointers -- sure.  It gets worse, much worse (the data never
gets sent! just some pointers! and we triple-encrypt those! etc.), and
Bruce Schneier has devoted some non-too-kind words to this and other
security companies' snake oil at
  http://www.counterpane.com/crypto-gram-9902.html

--
Cliff
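
The counting argument against any such scheme fits in a few lines (a
sketch; n=16 is arbitrary): there are 2^n files of n bits but only
2^n - 1 binary strings shorter than n bits, so no lossless method --
pointers into a virtual matrix included -- can shrink every file.

    def shorter_strings(n):
        # number of binary strings of length < n: 2^0 + 2^1 + ... + 2^(n-1)
        return 2**n - 1

    n = 16
    files, slots = 2**n, shorter_strings(n)
    print(f"{files} possible files, {slots} shorter descriptions:",
          f"at least {files - slots} cannot be compressed")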




Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:

  AIXI-tl can learn the iterated PD, of course; just not the
  oneshot complex PD.

 But if it's had the right prior experience, it may have an operating program
 that is able to deal with the oneshot complex PD... ;-)


Ben, I'm not sure AIXI is capable of this.  AIXI may inexorably predict 
the environment and then inexorably try to maximize reward given 
environment.  The reflective realization that *your own choice* to follow 
that control procedure is correlated with a distant entity's choice not to 
cooperate with you may be beyond AIXI.  If it was the iterated PD, AIXI 
would learn how a defection fails to maximize reward over time.  But can 
AIXI understand, even in theory, regardless of what its internal programs 
simulate, that its top-level control function fails to maximize the a 
priori propensity of other minds with information about AIXI's internal 
state to cooperate with it, on the *one* shot PD?  AIXI can't take the 
action it needs to learn the utility of...

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
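
The asymmetry is easy to see in miniature (a sketch with the standard
payoff matrix; the opponent here is plain tit-for-tat, not a
superintelligence inspecting your initial conditions): in a single
round against an uncorrelated opponent, defection dominates pointwise,
while under iteration a defection feeds back through the opponent's
future moves.

    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # One shot, opponent's move fixed and uncorrelated with ours: D dominates.
    for opp in "CD":
        assert PAYOFF[("D", opp)] > PAYOFF[("C", opp)]

    # Iterated against tit-for-tat, defection echoes back.
    def total(move, rounds=100):
        opp, score = "C", 0
        for _ in range(rounds):
            score += PAYOFF[(move, opp)]
            opp = move                   # tit-for-tat repeats our last move
        return score

    print(total("C"), total("D"))        # 300 vs 104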



RE: [agi] Breaking AIXI-tl

2003-02-15 Thread Ben Goertzel

I guess that for AIXI to learn this sort of thing, it would have to be
rewarded for understanding AIXI in general, for proving theorems about AIXI,
etc.  Once it had learned this, it might be able to apply this knowledge in
the one-shot PD context  But I am not sure.

ben

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Eliezer S. Yudkowsky
 Sent: Saturday, February 15, 2003 3:36 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] Breaking AIXI-tl


 Ben Goertzel wrote:
 AIXI-tl can learn the iterated PD, of course; just not the
 oneshot complex PD.
 
  But if it's had the right prior experience, it may have an
 operating program
  that is able to deal with the oneshot complex PD... ;-)

 Ben, I'm not sure AIXI is capable of this.  AIXI may inexorably predict
 the environment and then inexorably try to maximize reward given
 environment.  The reflective realization that *your own choice* to follow
 that control procedure is correlated with a distant entity's
 choice not to
 cooperate with you may be beyond AIXI.  If it was the iterated PD, AIXI
 would learn how a defection fails to maximize reward over time.  But can
 AIXI understand, even in theory, regardless of what its internal programs
 simulate, that its top-level control function fails to maximize the a
 priori propensity of other minds with information about AIXI's internal
 state to cooperate with it, on the *one* shot PD?  AIXI can't take the
 action it needs to learn the utility of...

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence






Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Brad Wyble

 I guess that for AIXI to learn this sort of thing, it would have to be
 rewarded for understanding AIXI in general, for proving theorems about AIXI,
 etc.  Once it had learned this, it might be able to apply this knowledge in
 the one-shot PD context  But I am not sure.
 

For those of us who have missed a critical message or two in this weekend's lengthy 
exchange, can you explain briefly the one-shot complex PD?  I'm unsure how a program 
could evaluate and learn to predict the behavior of its opponent if it only gets 
1-shot.  Obviously I'm missing something.

-Brad






Re: [agi] AIXI and Solomonoff induction

2003-02-15 Thread Shane Legg

The other textbook that I know is by Cristian S. Calude, the Prof. of
complexity theory that I studied under here in New Zealand.  A new
version of this book just recently came out.  Going by the last version,
the book will be somewhat more terse than the Li and Vitanyi book and
thus more appropriate for professional mathematicians who are used to
that sort of style.  The Li and Vitanyi book is also a lot broader in
its content thus for you I'd recommend the Li and Vitanyi book which
is without doubt THE book in the field, as James already pointed out.

There should be a new version (third edition) of Li and Vitanyi sometime
this year which will be interesting.  Li and Vitanyi have also written
quite a few introductions to the basics of the field many of which you
should be able to find on the internet.

Cheers
Shane


The Li and Vitanyi book is actually intended to be a graduate-level text in
theoretical computer science (or so it says on the cover) and is formatted
like a math textbook.  It assumes little and pretty much starts from the
beginning of the field; you should have no problems accessing the content.

It is a well-written book, which is a good thing since it is sort of THE
text for the field with few other choices.

Cheers,

-James Rogers
 [EMAIL PROTECTED]






Re: [agi] AIXI and Solomonoff induction

2003-02-15 Thread Shane Legg

Hi Cliff,

Sorry about the delay... I've been out sailing watching the America's
Cup racing --- just a pity my team keeps losing to the damn Swiss! :(

Anyway:

Cliff Stabbert wrote:


SL This seems to be problematic to me.  For example, a random string
SL generated by coin flips is not compressible at all so would you
SL say that it's alive?

 No, although it does take something living to flip the coins; but
 presumably it's non-random (physically predictable by observing
 externals) from the moment the coin has been flipped.  The decision to
 call heads or tails however is not at all as *easily* physically
 predictable, perhaps that's what I'm getting at.  But I understand
 your point about compressibility (expanded below).


Well I could always build a machine to flip coins... or one that pulls
lottery balls out of a spinning drum, for that matter.

Is such a thing predictable, at least in theory?  I have read about
this sort of thing before but to be perfectly honest I don't recall
the details... perhaps the Heisenberg principle makes it impossible
even in theory.  You would need to ask a quantum physicist I suppose.



 more and more quickly: the tides are more predictable than the
 behaviour of an ant, the ants are more predictable than a wolf, the
 wolves are more predictable than a human in 800 B.C., and the human in
 800 B.C. is more predictable than the human in 2003 A.D.

 In that sense, Singularity Theory seems to be a statement of the
 development of life's (Kolmogorov?) complexity over time.


Well it's hard to say actually.  An ant is less complex than a human,
but an ant really only makes sense in the context of the nest that it
belongs to and, if I remember correctly, the total neural mass of some
ants' nests is about the same as that of a human brain.  Also whales
have much larger brains than humans and so are perhaps more complex
in some physical sense at least.

A lot of people in complexity believed that there was an evolutionary
pressure driving systems to become more complex.  As far as I know there
aren't any particularly good results in this direction -- though I don't
exactly follow it much.

Cheers
Shane
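
A crude way to see the compressibility point empirically (compressed
size is only an upper bound on Kolmogorov complexity, and zlib is a
weak compressor, so this is illustrative at best): a coin-flip string
barely compresses, while a patterned string of the same length
collapses.

    import os
    import zlib

    n = 100_000
    coin_flips = os.urandom(n)           # stand-in for a random bit source
    patterned = b"ab" * (n // 2)

    for name, data in (("random", coin_flips), ("patterned", patterned)):
        print(name, len(data), "->", len(zlib.compress(data, 9)))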
