Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread William Pearson
2008/6/21 Wei Dai [EMAIL PROTECTED]:
 A different way to break Solomonoff Induction takes advantage of the fact
 that it restricts Bayesian reasoning to computable models. I wrote about
 this in "is induction unformalizable?" [2] on the "everything" mailing list.
 Abram Demski also made similar points in recent posts on this mailing list.


I think this is a much stronger objection when you actually try to build
an implementable variant of Solomonoff Induction (it has started to
make me chuckle that a model of induction makes assumptions about the
universe that would have to be broken in order to implement it). When
you restrict the memory space of a system, many more functions become
uncomputable with respect to that system. It is not a safe assumption
that the world is computable in this restricted sense, i.e. computable
with respect to a finite system.
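
A rough counting sketch of that last point (the particular n and m values
below are made up for illustration):

# A system whose entire description (program plus working memory) fits in
# m bits can be one of at most 2**m distinct systems, so it can realise at
# most 2**m distinct input/output functions. But there are 2**(2**n) boolean
# functions of an n-bit input, so once 2**n exceeds m, almost every such
# function is uncomputable relative to that finite system.

def log2_counts(n_input_bits, m_system_bits):
    log2_functions = 2 ** n_input_bits   # log2 of the number of boolean functions on n bits
    log2_realisable = m_system_bits      # log2 of an upper bound on realisable functions
    return log2_functions, log2_realisable

for n, m in [(4, 8), (8, 64), (16, 1024)]:
    f_bits, r_bits = log2_counts(n, m)
    print(f"n={n:2d} input bits, m={m:5d} system bits: "
          f"2**{f_bits} functions vs at most 2**{r_bits} realisable")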

Solomonoff induction also ignores any potential physical effects of the
computation itself, as does all of probability theory. See section 5 of
this draft paper of mine for a formalised example of where things could
go wrong.

http://codesoup.sourceforge.net/easa.pdf

It is not quite an anthropic problem, but it is closely related. I'll
tentatively label it the observer-world interaction problem: the exact
nature of the world you see depends upon the type of system you happen
to be.

All of these are problems with the tacit (a la Dennett) representations
of beliefs embedded within the Solomonoff induction formalism.

  Will Pearson




[agi] Approximations of Knowledge

2008-06-21 Thread Jim Bromer
I just read Abram Demski's comments about Loosemore's "Complex Systems,
Artificial Intelligence and Theoretical Psychology" at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html
I thought Abram's comments were interesting.  I just wanted to make a few
criticisms. One is that a logical or rational approach to AI does not
necessarily mean that it would be a fully constrained logical-mathematical
method.  My point of view is that if you use a logical or a rational method
with an unconstrained inductive system (open and not monotonic) then the
logical system will, for any likely use, act like a rational-non-rational
system no matter what you do.  So when I, for example, start thinking about
whether or not I will be able to use my SAT system (logical satisfiability)
for an AGI program, I am not thinking of an implementation of a pure
Aristotelian-Boolean system of knowledge.  The system I am currently
considering would use logic to study theories and theory-like relations that
refer to concepts about the natural universe and the universe of thought,
but without the expectation that those theories could ever constitute a
sound, strictly logical or rational model of everything.  Such ideas are so
beyond the pale that I do not even consider the possibility to be worthy of
effort.  No one in his right mind would seriously think that he could write
a computer program that could explain everything perfectly without error.
If anyone seriously talked like that I would take it as an indication of
some significant psychological problem.
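
As a point of reference, a minimal brute-force satisfiability check over
CNF clauses can be sketched as follows (an illustration of what logical
satisfiability means in general, not a description of the SAT system
mentioned above):

from itertools import product

def satisfiable(clauses, num_vars):
    """clauses: list of clauses; each clause is a list of ints, where k means
    variable k is true and -k means variable k is false."""
    for assignment in product([False, True], repeat=num_vars):
        def literal_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # satisfied if every clause has at least one true literal
        if all(any(literal_true(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(satisfiable([[1, 2], [-1, 3], [-2, -3]], num_vars=3))   # True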
 
I also take it as a given that AI would suffer from the problem of
computational irreducibility if its design goals were to completely
comprehend all complexity using only logical methods in the strictest
sense.  However, many complex ideas may be simplified, and these
simplifications can be used wisely in specific circumstances.  My belief
is that many interrelated layers of simplification, if they are used
insightfully, can effectively represent complex ideas that may not be
completely understood, just as we use insightful simplifications while
trying to discuss something that is not completely understood, like
intelligence.  My problem with developing an AI program is not that I
cannot figure out how to create complex systems of insightful
simplifications, but that I do not know how to develop a computer program
capable of sufficient complexity to handle the load that the system would
produce.  So while I agree with Demski's conclusion that "there is a way
to salvage Loosemore's position ... [through] shortcutting an irreducible
computation by compromising, allowing the system to produce
less-than-perfect results," and that "as we tackle harder problems, the
methods must become increasingly approximate," I do not agree that the
contemporary problem is with logic or with the complexity of human
knowledge. I feel that the major problem I have is that writing a really
really complicated computer program is really really difficult.
 
The problem I have with people who talk about ANNs or
probability nets as if their paradigm of choice were the inevitable solution to
complexity is that they never discuss how their approach might actually handle
complexity. Most advocates of ANNs or probability deal with the problem of
complexity as if it were a problem that either does not exist or has already
been solved by whatever tired paradigm they are advocating.  I don't get that.
 
The major problem I have is that writing a really really complicated
computer program is really really difficult.  But perhaps Abram's idea
could be useful here.  As the program has to deal with more complicated
collections of simple insights that concern some hard subject matter, it
could tend to rely more on approximations to manage those complexes of
insight.

Jim Bromer



  




Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread Abram Demski
Quick argument for the same point: AIXI is uncomputable, but only
considers computable models. The anthropic principle requires a
rational entity to include itself in all models that are given nonzero
probability. AIXI obviously cannot do so.

Such an argument does not apply to computable approximations of AIXI,
however. But those approximations might still fail for similar reasons:
strict AIXI approximations are approximations of an entity that can't
reason about itself, so any ability to do so is an artifact of the
approximation.
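
For concreteness, the Solomonoff prior behind this kind of induction can be
written in the standard textbook form (not quoted from any post here) as

    M(x) = \sum_{p : U(p) = x*} 2^{-\ell(p)},

where U is a universal prefix machine and the sum runs over programs p whose
output begins with x. The mixture therefore contains computable generators
only, so no model receiving nonzero weight can describe an incomputable
process such as AIXI itself, which is the formal content of the objection.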

On Fri, Jun 20, 2008 at 8:09 PM, Wei Dai [EMAIL PROTECTED] wrote:
 Eliezer S. Yudkowsky pointed out in a 2003 agi post titled "Breaking
 Solomonoff induction... well, not really" [1] that
 Solomonoff Induction is flawed because it fails to incorporate anthropic
 reasoning. But apparently he thought this doesn't really matter because in
 the long run Solomonoff Induction will converge with the correct reasoning.
 Here I give two counterexamples to show that this convergence does not
 necessarily occur.

 The first example is a thought experiment where an induction/prediction
 machine is first given the following background information: Before
 predicting each new input symbol, it will be copied 9 times. Each copy will
 then receive the input "1", while the original will receive "0". The 9
 copies that received "1" will be put aside, while the original will be
 copied 9 more times before predicting the next symbol, and so on. To a human
 upload, or a machine capable of anthropic reasoning, this problem is
 simple: no matter how many "0"s it sees, it should always predict "1" with
 probability 0.9, and "0" with probability 0.1. But with Solomonoff
 Induction, as the number of "0"s it receives goes to infinity, the
 probability it predicts for "1" being the next input must converge to 0.
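
As a toy illustration of that convergence (the hypothesis class and prior
weights below are invented; they stand in for the full Solomonoff mixture,
which really runs over all programs):

# Each hypothesis is a fixed probability that the next symbol is "1"; the
# "simplest" hypothesis, deterministic all-0s, gets the largest prior weight.
# After a long run of observed 0s the posterior predictive for "next symbol
# is 1" collapses toward 0, while the anthropic answer stays fixed at 0.9.

def predictive_prob_of_one(num_zeros_seen, hypotheses):
    # hypotheses: list of (prior_weight, prob_next_is_one) pairs
    weights = [w * (1.0 - p) ** num_zeros_seen for w, p in hypotheses]
    total = sum(weights)
    return sum(w * p for w, (_, p) in zip(weights, hypotheses)) / total

hypotheses = [(0.50, 0.0),   # "always 0": simplest, gets the highest prior
              (0.25, 0.9),   # the anthropically correct rate
              (0.25, 0.5)]   # a fair coin

for n in (1, 10, 50, 200):
    print(n, round(predictive_prob_of_one(n, hypotheses), 6))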

 In the second example, an intelligence wakes up with no previous memory and
 finds itself in an environment that apparently consists of a set of random
 integers and some of their factorizations. It finds that whenever it outputs
 a factorization for a previously unfactored number, it is rewarded. To a
 human upload, or a machine capable of anthropic reasoning, it would be
 immediately obvious that this cannot be the true environment, since such an
 environment is incapable of supporting an intelligence such as itself.
 Instead, a more likely explanation is that it is being used by another
 intelligence as a codebreaker. But Solomonoff Induction is incapable of
 reaching such a conclusion no matter how much time we give it, since it
 takes fewer bits to algorithmically describe just a set of random numbers
 and their factorizations, than such a set embedded within a universe capable
 of supporting intelligent life. (Note that I'm assuming that these numbers
 are truly random, for example generated using quantum coin flips.)

 A different way to break Solomonoff Induction takes advantage of the fact
 that it restricts Bayesian reasoning to computable models. I wrote about
 this in "is induction unformalizable?" [2] on the "everything" mailing list.
 Abram Demski also made similar points in recent posts on this mailing list.

 [1] http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
 [2]
 http://groups.google.com/group/everything-list/browse_frm/thread/c7442c13ff1396ec/804e134c70d4a203









Re: [agi] Approximations of Knowledge

2008-06-21 Thread Steve Richfield
Jim,

On 6/21/08, Jim Bromer [EMAIL PROTECTED] wrote:

   The major problem I have is that writing a really really complicated
 computer program is really really difficult.

The ONLY rational approach to this (that I know of) is to construct an
engine that develops and applies machine knowledge, wisdom, or whatever,
and NOT to write code yourself that actually deals with articles of
knowledge/wisdom. That engine itself will still be a bit complex, so you
must write it in Visual Basic or .NET, which provide a protected execution
environment, and NOT in C/C++, which makes it ever so easy to
inadvertently hide really nasty bugs.

REALLY complex systems may require multi-level interpreters, where a
low-level interpreter provides a pseudo-machine on which to program a really
smart high-level interpreter, on which you program your AGI. In ~1970 I
wrote an ALGOL/FORTRAN/BASIC compiler that ran in just 16K bytes this way.
At the bottom was a pseudo-computer whose primitives were fundamental to
compiling. That pseudo-machine was then fed a program to read BNF and make
compilers, which was then fed a BNF description of my compiler, with the
output being my compiler in pseudo-machine code. One feature of this
approach was that for anything to work, everything had to work, so once
past initial debugging, it worked perfectly! Contrast this with modern
methods that consume megabytes and never work quite right.
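
To make the layering concrete, here is a minimal sketch of the idea in
Python (the instruction set and toy program are invented for illustration;
this is not the 1970 system, and a real level-2 program would itself be
another interpreter or a BNF-driven compiler generator rather than a toy
arithmetic routine):

def run_pseudo_machine(program, x):
    """Interpret a list of (op, arg) instructions on a simple stack machine."""
    stack = [x]
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "dup":
            stack.append(stack[-1])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown op: " + repr(op))
    return stack.pop()

# Level-2 "program": computes x*x + 1 using only level-1 primitives.
square_plus_one = [("dup", None), ("mul", None), ("push", 1), ("add", None)]

print(run_pseudo_machine(square_plus_one, 7))   # prints 50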

I wrote Dr. Eliza over the course of a year. I developed a daily workflow
that started with answering my email while I woke up. Then came the most
creative work - module design. Then came programming, and finally came
debugging and testing. Obviously, you need a solid plan to start with to
complete such an effort. I spent another year developing my plan, an effort
that also involved going to computer conferences and bending the ear of
anyone who might have some applicable expertise. On a scale of complexity,
Dr. Eliza is MUCH simpler than many of the proposals being made here.
However, it does have one salient feature - it actually works in a
real-world useful way.

The more complex the software, the better the design must be, and the more
protected the execution must be. You can NEVER anticipate everything that
might go into a program, so it must fail ever so softly.

Much of what I have been challenging others about on this forum came out
of the analysis and design of Dr. Eliza. The real world definitely has some
interesting structure, e.g. the figure-6 shape of cause-and-effect chains,
and the fact that problems are a phenomenon that exists behind people's
eyeballs and NOT otherwise in the real world. Ignoring such things and
diving in and hoping that machine intelligence will resolve all (as
many/most here seem to believe) IMHO is a rookie error that leads nowhere
useful.

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-21 Thread Abram Demski
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but only one. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

Re: [agi] Approximations of Knowledge

2008-06-21 Thread Steve Richfield
Abram,

A useful midpoint between the two views is to decide what knowledge must
distill down to in order to relate it all together and do whatever you want
to do. I did this with Dr. Eliza and realized that I had to have a column
in my DB that contained what people typically say to indicate the presence
of various symptoms (of various cause-and-effect chain links). I now
realize that ignorance of the operation of various processes is itself also
a condition with its own symptoms, each with its own common expressions of
ignorance. OK, so just where was my column going to come from? This
information is NOT on the Internet, Wikipedia, etc., yet any expert can
rattle this information off in a heartbeat. The only obvious answer was to
have experts hand code this information. I am STILL listening to anyone who
claims to have another/better way, but I have yet to hear ANY other
functional proposal. Of course, this simple realization dooms all of the
several efforts now underway to mine the Internet and Wikipedia for
knowledge from which to solve problems, yet no one seems to be interested in
this simple gotcha, while these doomed efforts continue.
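
A minimal sketch of the kind of table being described (the entries and the
matching rule below are invented placeholders, not Dr. Eliza's actual schema
or contents):

# Hand-coded mapping from a symptom (or a common expression of ignorance
# about some process) to the phrases people typically use for it.
SYMPTOM_PHRASES = {
    "chronic fatigue": ["always tired", "no energy", "exhausted all the time"],
    "poor sleep": ["can't sleep", "wake up at 3am", "toss and turn"],
    "process ignorance": ["i have no idea why", "nothing i try makes a difference"],
}

def detect_symptoms(statement):
    """Return the symptoms whose typical phrasings appear in a statement."""
    text = statement.lower()
    return [symptom for symptom, phrases in SYMPTOM_PHRASES.items()
            if any(phrase in text for phrase in phrases)]

print(detect_symptoms("I'm exhausted all the time and I have no idea why."))
# -> ['chronic fatigue', 'process ignorance']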

I believe that ALL of the ongoing disputes here on this forum are born of a
lack of analysis. While the contents of a knowledge base may be very complex
and interrelated, the structure of that DB should be relatively simple. This
discussion should start with a proposal for structure, and continue as the
flaws in that proposal are each identified and addressed.

Note in passing that the value of any problem solving system lies in its
ability to solve problems with an absolute minimum of information. Hence,
systems that require the most information are worth the least, and systems
that require all information are completely worthless. Dr. Eliza was
designed to operate right at the (currently believed to be) absolute
minimum.

I completely agree with others here that Dr. Eliza is NOT an AGI as
currently envisioned. However, for many of the projected problem-solving
functions of a future AGI, it appears to be absolutely unbeatable. People
need to either target other functionality for a *useful* future AGI, or else
develop designs that won't be predictably inferior to Dr. Eliza. For this,
they would do well to fully understand the operation of Dr. Eliza, which
should be no problem since it is conceptually pretty simple. Most of the
code goes to support speech I/O, the USENET interface, etc., and NOT its
core problem solving ability.

Steve Richfield